00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3691 00:00:00.002 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3292 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.114 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.116 The recommended git tool is: git 00:00:00.116 using credential 00000000-0000-0000-0000-000000000002 00:00:00.118 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.157 Fetching changes from the remote Git repository 00:00:00.159 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.187 Using shallow fetch with depth 1 00:00:00.187 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.187 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.231 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.231 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.734 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.744 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.755 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD) 00:00:05.755 > git config core.sparsecheckout # timeout=10 00:00:05.764 > git read-tree -mu HEAD # timeout=10 00:00:05.779 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5 00:00:05.805 Commit message: "jenkins/config: Drop WFP25 for maintenance" 00:00:05.805 > git rev-list --no-walk e33ef006ccd688d2b66122cd0240b989d53c9017 # timeout=10 00:00:05.900 [Pipeline] Start of Pipeline 00:00:05.910 [Pipeline] library 00:00:05.911 Loading library shm_lib@master 00:00:05.911 Library shm_lib@master is cached. Copying from home. 00:00:05.927 [Pipeline] node 00:00:05.933 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.934 [Pipeline] { 00:00:05.941 [Pipeline] catchError 00:00:05.942 [Pipeline] { 00:00:05.951 [Pipeline] wrap 00:00:05.957 [Pipeline] { 00:00:05.963 [Pipeline] stage 00:00:05.964 [Pipeline] { (Prologue) 00:00:06.123 [Pipeline] sh 00:00:06.401 + logger -p user.info -t JENKINS-CI 00:00:06.419 [Pipeline] echo 00:00:06.421 Node: GP6 00:00:06.427 [Pipeline] sh 00:00:06.726 [Pipeline] setCustomBuildProperty 00:00:06.738 [Pipeline] echo 00:00:06.739 Cleanup processes 00:00:06.745 [Pipeline] sh 00:00:07.044 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.044 3531541 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.057 [Pipeline] sh 00:00:07.342 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.342 ++ grep -v 'sudo pgrep' 00:00:07.342 ++ awk '{print $1}' 00:00:07.342 + sudo kill -9 00:00:07.342 + true 00:00:07.357 [Pipeline] cleanWs 00:00:07.367 [WS-CLEANUP] Deleting project workspace... 00:00:07.367 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.374 [WS-CLEANUP] done 00:00:07.378 [Pipeline] setCustomBuildProperty 00:00:07.392 [Pipeline] sh 00:00:07.676 + sudo git config --global --replace-all safe.directory '*' 00:00:07.755 [Pipeline] httpRequest 00:00:07.788 [Pipeline] echo 00:00:07.789 Sorcerer 10.211.164.101 is alive 00:00:07.797 [Pipeline] httpRequest 00:00:07.802 HttpMethod: GET 00:00:07.802 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:07.803 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:07.825 Response Code: HTTP/1.1 200 OK 00:00:07.825 Success: Status code 200 is in the accepted range: 200,404 00:00:07.826 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:34.046 [Pipeline] sh 00:00:34.332 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz 00:00:34.346 [Pipeline] httpRequest 00:00:34.375 [Pipeline] echo 00:00:34.377 Sorcerer 10.211.164.101 is alive 00:00:34.384 [Pipeline] httpRequest 00:00:34.388 HttpMethod: GET 00:00:34.389 URL: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:34.389 Sending request to url: http://10.211.164.101/packages/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:00:34.408 Response Code: HTTP/1.1 200 OK 00:00:34.409 Success: Status code 200 is in the accepted range: 200,404 00:00:34.409 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:01:12.422 [Pipeline] sh 00:01:12.706 + tar --no-same-owner -xf spdk_78cbcfdde1ea721461a0377ef7e908b0636460ea.tar.gz 00:01:16.005 [Pipeline] sh 00:01:16.291 + git -C spdk log --oneline -n5 00:01:16.291 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:16.291 ba69d4678 event/scheduler: remove custom opts from static scheduler 00:01:16.291 79fce488b test/scheduler: test scheduling period with dynamic scheduler 00:01:16.291 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair 00:01:16.291 084afa904 util: copy errno before calling stdlib's functions 00:01:16.310 [Pipeline] withCredentials 00:01:16.323 > git --version # timeout=10 00:01:16.338 > git --version # 'git version 2.39.2' 00:01:16.362 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:16.365 [Pipeline] { 00:01:16.376 [Pipeline] retry 00:01:16.378 [Pipeline] { 00:01:16.404 [Pipeline] sh 00:01:16.889 + git ls-remote http://dpdk.org/git/dpdk main 00:01:16.906 [Pipeline] } 00:01:16.931 [Pipeline] // retry 00:01:16.937 [Pipeline] } 00:01:16.957 [Pipeline] // withCredentials 00:01:16.965 [Pipeline] httpRequest 00:01:16.983 [Pipeline] echo 00:01:16.984 Sorcerer 10.211.164.101 is alive 00:01:16.992 [Pipeline] httpRequest 00:01:16.996 HttpMethod: GET 00:01:16.997 URL: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:16.997 Sending request to url: http://10.211.164.101/packages/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:17.006 Response Code: HTTP/1.1 200 OK 00:01:17.007 Success: Status code 200 is in the accepted range: 200,404 00:01:17.007 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:21.113 [Pipeline] sh 00:01:21.395 + tar --no-same-owner -xf dpdk_82c47f005b9a0a1e3a649664b7713443d18abe43.tar.gz 00:01:22.799 [Pipeline] sh 00:01:23.078 + git -C dpdk log --oneline -n5 00:01:23.078 
82c47f005b version: 24.07-rc3 00:01:23.078 d9d1be537e doc: remove reference to mbuf pkt field 00:01:23.078 52c7393a03 doc: set required MinGW version in Windows guide 00:01:23.078 92439dc9ac dts: improve starting and stopping interactive shells 00:01:23.078 2b648cd4e4 dts: add context manager for interactive shells 00:01:23.090 [Pipeline] } 00:01:23.108 [Pipeline] // stage 00:01:23.118 [Pipeline] stage 00:01:23.121 [Pipeline] { (Prepare) 00:01:23.142 [Pipeline] writeFile 00:01:23.160 [Pipeline] sh 00:01:23.447 + logger -p user.info -t JENKINS-CI 00:01:23.461 [Pipeline] sh 00:01:23.745 + logger -p user.info -t JENKINS-CI 00:01:23.757 [Pipeline] sh 00:01:24.041 + cat autorun-spdk.conf 00:01:24.041 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.041 SPDK_TEST_NVMF=1 00:01:24.041 SPDK_TEST_NVME_CLI=1 00:01:24.041 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.041 SPDK_TEST_NVMF_NICS=e810 00:01:24.041 SPDK_TEST_VFIOUSER=1 00:01:24.041 SPDK_RUN_UBSAN=1 00:01:24.041 NET_TYPE=phy 00:01:24.041 SPDK_TEST_NATIVE_DPDK=main 00:01:24.041 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.049 RUN_NIGHTLY=1 00:01:24.055 [Pipeline] readFile 00:01:24.082 [Pipeline] withEnv 00:01:24.085 [Pipeline] { 00:01:24.101 [Pipeline] sh 00:01:24.388 + set -ex 00:01:24.388 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:24.388 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:24.388 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:24.388 ++ SPDK_TEST_NVMF=1 00:01:24.388 ++ SPDK_TEST_NVME_CLI=1 00:01:24.388 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:24.388 ++ SPDK_TEST_NVMF_NICS=e810 00:01:24.388 ++ SPDK_TEST_VFIOUSER=1 00:01:24.388 ++ SPDK_RUN_UBSAN=1 00:01:24.388 ++ NET_TYPE=phy 00:01:24.388 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:24.388 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:24.388 ++ RUN_NIGHTLY=1 00:01:24.388 + case $SPDK_TEST_NVMF_NICS in 00:01:24.388 + DRIVERS=ice 00:01:24.388 + [[ tcp == \r\d\m\a ]] 00:01:24.388 + [[ -n ice ]] 00:01:24.388 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:24.388 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:24.388 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:24.388 rmmod: ERROR: Module irdma is not currently loaded 00:01:24.388 rmmod: ERROR: Module i40iw is not currently loaded 00:01:24.388 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:24.388 + true 00:01:24.388 + for D in $DRIVERS 00:01:24.388 + sudo modprobe ice 00:01:24.388 + exit 0 00:01:24.397 [Pipeline] } 00:01:24.416 [Pipeline] // withEnv 00:01:24.422 [Pipeline] } 00:01:24.442 [Pipeline] // stage 00:01:24.451 [Pipeline] catchError 00:01:24.453 [Pipeline] { 00:01:24.470 [Pipeline] timeout 00:01:24.471 Timeout set to expire in 50 min 00:01:24.472 [Pipeline] { 00:01:24.490 [Pipeline] stage 00:01:24.491 [Pipeline] { (Tests) 00:01:24.502 [Pipeline] sh 00:01:24.787 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.787 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.787 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.787 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:24.787 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:24.787 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.787 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:24.787 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.787 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:24.787 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:24.787 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:24.787 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:24.787 + source /etc/os-release 00:01:24.787 ++ NAME='Fedora Linux' 00:01:24.787 ++ VERSION='38 (Cloud Edition)' 00:01:24.787 ++ ID=fedora 00:01:24.787 ++ VERSION_ID=38 00:01:24.787 ++ VERSION_CODENAME= 00:01:24.787 ++ PLATFORM_ID=platform:f38 00:01:24.787 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:24.787 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:24.787 ++ LOGO=fedora-logo-icon 00:01:24.787 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:24.787 ++ HOME_URL=https://fedoraproject.org/ 00:01:24.787 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:24.787 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:24.787 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:24.787 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:24.787 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:24.787 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:24.787 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:24.787 ++ SUPPORT_END=2024-05-14 00:01:24.787 ++ VARIANT='Cloud Edition' 00:01:24.787 ++ VARIANT_ID=cloud 00:01:24.787 + uname -a 00:01:24.787 Linux spdk-gp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:24.787 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:25.721 Hugepages 00:01:25.721 node hugesize free / total 00:01:25.721 node0 1048576kB 0 / 0 00:01:25.721 node0 2048kB 0 / 0 00:01:25.721 node1 1048576kB 0 / 0 00:01:25.721 node1 2048kB 0 / 0 00:01:25.721 00:01:25.721 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:25.721 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:25.721 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:25.980 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:25.980 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:25.980 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:25.980 + rm -f /tmp/spdk-ld-path 00:01:25.980 + source autorun-spdk.conf 00:01:25.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.980 ++ SPDK_TEST_NVMF=1 00:01:25.980 ++ SPDK_TEST_NVME_CLI=1 00:01:25.980 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.980 ++ SPDK_TEST_NVMF_NICS=e810 00:01:25.980 ++ SPDK_TEST_VFIOUSER=1 00:01:25.980 ++ SPDK_RUN_UBSAN=1 00:01:25.980 ++ NET_TYPE=phy 00:01:25.980 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:25.980 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.980 ++ RUN_NIGHTLY=1 00:01:25.980 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:25.980 + [[ -n '' ]] 00:01:25.980 + sudo git config --global --add safe.directory 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:25.980 + for M in /var/spdk/build-*-manifest.txt 00:01:25.980 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:25.980 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.980 + for M in /var/spdk/build-*-manifest.txt 00:01:25.980 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:25.980 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:25.980 ++ uname 00:01:25.980 + [[ Linux == \L\i\n\u\x ]] 00:01:25.980 + sudo dmesg -T 00:01:25.980 + sudo dmesg --clear 00:01:25.980 + dmesg_pid=3532270 00:01:25.980 + [[ Fedora Linux == FreeBSD ]] 00:01:25.980 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.980 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:25.980 + sudo dmesg -Tw 00:01:25.980 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:25.980 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:25.980 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:25.980 + [[ -x /usr/src/fio-static/fio ]] 00:01:25.980 + export FIO_BIN=/usr/src/fio-static/fio 00:01:25.980 + FIO_BIN=/usr/src/fio-static/fio 00:01:25.980 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:25.980 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:25.980 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:25.980 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.980 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:25.980 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:25.980 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.980 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:25.980 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:25.980 Test configuration: 00:01:25.980 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:25.980 SPDK_TEST_NVMF=1 00:01:25.980 SPDK_TEST_NVME_CLI=1 00:01:25.980 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:25.980 SPDK_TEST_NVMF_NICS=e810 00:01:25.980 SPDK_TEST_VFIOUSER=1 00:01:25.980 SPDK_RUN_UBSAN=1 00:01:25.980 NET_TYPE=phy 00:01:25.980 SPDK_TEST_NATIVE_DPDK=main 00:01:25.980 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.980 RUN_NIGHTLY=1 08:47:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:25.980 08:47:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:25.980 08:47:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:25.980 08:47:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:25.980 08:47:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.980 08:47:04 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.980 08:47:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.980 08:47:04 -- paths/export.sh@5 -- $ export PATH 00:01:25.980 08:47:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:25.980 08:47:04 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:25.980 08:47:04 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:25.980 08:47:04 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721803624.XXXXXX 00:01:25.980 08:47:04 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721803624.xDYfRd 00:01:25.980 08:47:04 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:25.980 08:47:04 -- common/autobuild_common.sh@453 -- $ '[' -n main ']' 00:01:25.980 08:47:04 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:25.980 08:47:04 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:25.980 08:47:04 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:25.980 08:47:04 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:25.980 08:47:04 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:25.980 08:47:04 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:25.980 08:47:04 -- common/autotest_common.sh@10 -- $ set +x 00:01:25.980 08:47:04 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:25.980 08:47:04 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:25.980 08:47:04 -- pm/common@17 -- $ local monitor 00:01:25.980 08:47:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.980 08:47:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.980 08:47:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.980 
08:47:04 -- pm/common@21 -- $ date +%s 00:01:25.980 08:47:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:25.980 08:47:04 -- pm/common@21 -- $ date +%s 00:01:25.980 08:47:04 -- pm/common@25 -- $ sleep 1 00:01:25.980 08:47:04 -- pm/common@21 -- $ date +%s 00:01:25.980 08:47:04 -- pm/common@21 -- $ date +%s 00:01:25.980 08:47:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721803624 00:01:25.980 08:47:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721803624 00:01:25.980 08:47:04 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721803624 00:01:25.980 08:47:04 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721803624 00:01:25.980 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721803624_collect-vmstat.pm.log 00:01:25.980 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721803624_collect-cpu-load.pm.log 00:01:25.980 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721803624_collect-cpu-temp.pm.log 00:01:25.980 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721803624_collect-bmc-pm.bmc.pm.log 00:01:27.360 08:47:05 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:27.360 08:47:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:27.360 08:47:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:27.360 08:47:05 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.360 08:47:05 -- spdk/autobuild.sh@16 -- $ date -u 00:01:27.360 Wed Jul 24 06:47:05 AM UTC 2024 00:01:27.360 08:47:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:27.360 v24.09-pre-309-g78cbcfdde 00:01:27.360 08:47:05 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:27.360 08:47:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:27.360 08:47:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:27.360 08:47:05 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:27.360 08:47:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.360 08:47:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.360 ************************************ 00:01:27.360 START TEST ubsan 00:01:27.360 ************************************ 00:01:27.360 08:47:05 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:27.360 using ubsan 00:01:27.360 00:01:27.360 real 0m0.000s 00:01:27.360 user 0m0.000s 00:01:27.360 sys 0m0.000s 00:01:27.360 08:47:05 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:27.360 08:47:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:27.360 ************************************ 00:01:27.360 END TEST ubsan 00:01:27.360 ************************************ 00:01:27.360 08:47:05 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:01:27.360 08:47:05 
-- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:27.360 08:47:05 -- common/autobuild_common.sh@439 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:27.360 08:47:05 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:01:27.360 08:47:05 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:27.360 08:47:05 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.360 ************************************ 00:01:27.360 START TEST build_native_dpdk 00:01:27.360 ************************************ 00:01:27.360 08:47:05 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:27.360 82c47f005b version: 24.07-rc3 00:01:27.360 d9d1be537e doc: remove reference to mbuf pkt field 00:01:27.360 52c7393a03 doc: set required MinGW version in Windows guide 00:01:27.360 92439dc9ac dts: improve starting and stopping interactive shells 00:01:27.360 2b648cd4e4 dts: add context manager for interactive shells 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc3 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:27.360 08:47:05 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc3 21.11.0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 21.11.0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@341 -- $ 
case "$op" in 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:27.361 patching file config/rte_config.h 00:01:27.361 Hunk #1 succeeded at 70 (offset 11 lines). 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.07.0-rc3 24.07.0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc3 '<' 24.07.0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=4 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=24 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 07 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=7 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 07 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=07 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 07 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 7 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=7 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 0 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal rc3 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=rc3 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ rc3 =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^0x ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@353 -- $ [[ rc3 =~ ^[a-f0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal '' 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d= 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ '' =~ ^[0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^0x ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@353 -- $ [[ '' =~ ^[a-f0-9]+$ ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@357 -- $ echo 0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=0 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@365 -- $ (( ver1[v] < ver2[v] )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v++ )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@367 -- $ [[ 24 7 0 0 == \2\4\ \7\ \0\ \0 ]] 00:01:27.361 08:47:05 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@181 -- $ uname -s 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:27.361 08:47:05 build_native_dpdk -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:31.560 The Meson build system 00:01:31.560 Version: 1.3.1 00:01:31.560 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:31.560 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:31.560 Build type: native build 00:01:31.560 Program cat found: YES (/usr/bin/cat) 00:01:31.560 Project name: DPDK 00:01:31.560 Project version: 24.07.0-rc3 00:01:31.560 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:31.560 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:31.560 Host machine cpu family: x86_64 00:01:31.560 Host machine cpu: x86_64 00:01:31.560 Message: ## Building in Developer Mode ## 00:01:31.560 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:31.560 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:31.560 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:31.560 Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools 00:01:31.560 Program cat found: YES (/usr/bin/cat) 00:01:31.560 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:01:31.560 Compiler for C supports arguments -march=native: YES 00:01:31.560 Checking for size of "void *" : 8 00:01:31.560 Checking for size of "void *" : 8 (cached) 00:01:31.560 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:31.560 Library m found: YES 00:01:31.560 Library numa found: YES 00:01:31.560 Has header "numaif.h" : YES 00:01:31.560 Library fdt found: NO 00:01:31.560 Library execinfo found: NO 00:01:31.560 Has header "execinfo.h" : YES 00:01:31.560 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:31.560 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:31.560 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:31.560 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:31.560 Run-time dependency openssl found: YES 3.0.9 00:01:31.560 Run-time dependency libpcap found: YES 1.10.4 00:01:31.560 Has header "pcap.h" with dependency libpcap: YES 00:01:31.560 Compiler for C supports arguments -Wcast-qual: YES 00:01:31.560 Compiler for C supports arguments -Wdeprecated: YES 00:01:31.560 Compiler for C supports arguments -Wformat: YES 00:01:31.560 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:31.560 Compiler for C supports arguments -Wformat-security: NO 00:01:31.560 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:31.561 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:31.561 Compiler for C supports arguments -Wnested-externs: YES 00:01:31.561 Compiler for C supports arguments -Wold-style-definition: YES 00:01:31.561 Compiler for C supports arguments -Wpointer-arith: YES 00:01:31.561 Compiler for C supports arguments -Wsign-compare: YES 00:01:31.561 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:31.561 Compiler for C supports arguments -Wundef: YES 00:01:31.561 Compiler for C supports arguments -Wwrite-strings: YES 00:01:31.561 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:31.561 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:31.561 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:31.561 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:31.561 Program objdump found: YES (/usr/bin/objdump) 00:01:31.561 Compiler for C supports arguments -mavx512f: YES 00:01:31.561 Checking if "AVX512 checking" compiles: YES 00:01:31.561 Fetching value of define "__SSE4_2__" : 1 00:01:31.561 Fetching value of define "__AES__" : 1 00:01:31.561 Fetching value of define "__AVX__" : 1 00:01:31.561 Fetching value of define "__AVX2__" : (undefined) 00:01:31.561 Fetching value of define "__AVX512BW__" : (undefined) 00:01:31.561 Fetching value of define "__AVX512CD__" : (undefined) 00:01:31.561 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:31.561 Fetching value of define "__AVX512F__" : (undefined) 00:01:31.561 Fetching value of define "__AVX512VL__" : (undefined) 00:01:31.561 Fetching value of define "__PCLMUL__" : 1 00:01:31.561 Fetching value of define "__RDRND__" : 1 00:01:31.561 Fetching value of define "__RDSEED__" : (undefined) 00:01:31.561 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:31.561 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:31.561 Message: lib/log: Defining dependency "log" 00:01:31.561 Message: lib/kvargs: Defining dependency "kvargs" 00:01:31.561 Message: lib/argparse: Defining dependency "argparse" 00:01:31.561 Message: lib/telemetry: Defining dependency "telemetry" 00:01:31.561 Checking for function 
"getentropy" : NO 00:01:31.561 Message: lib/eal: Defining dependency "eal" 00:01:31.561 Message: lib/ptr_compress: Defining dependency "ptr_compress" 00:01:31.561 Message: lib/ring: Defining dependency "ring" 00:01:31.561 Message: lib/rcu: Defining dependency "rcu" 00:01:31.561 Message: lib/mempool: Defining dependency "mempool" 00:01:31.561 Message: lib/mbuf: Defining dependency "mbuf" 00:01:31.561 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:31.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.561 Compiler for C supports arguments -mpclmul: YES 00:01:31.561 Compiler for C supports arguments -maes: YES 00:01:31.561 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:31.561 Compiler for C supports arguments -mavx512bw: YES 00:01:31.561 Compiler for C supports arguments -mavx512dq: YES 00:01:31.561 Compiler for C supports arguments -mavx512vl: YES 00:01:31.561 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:31.561 Compiler for C supports arguments -mavx2: YES 00:01:31.561 Compiler for C supports arguments -mavx: YES 00:01:31.561 Message: lib/net: Defining dependency "net" 00:01:31.561 Message: lib/meter: Defining dependency "meter" 00:01:31.561 Message: lib/ethdev: Defining dependency "ethdev" 00:01:31.561 Message: lib/pci: Defining dependency "pci" 00:01:31.561 Message: lib/cmdline: Defining dependency "cmdline" 00:01:31.561 Message: lib/metrics: Defining dependency "metrics" 00:01:31.561 Message: lib/hash: Defining dependency "hash" 00:01:31.561 Message: lib/timer: Defining dependency "timer" 00:01:31.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:31.561 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:31.561 Message: lib/acl: Defining dependency "acl" 00:01:31.561 Message: lib/bbdev: Defining dependency "bbdev" 00:01:31.561 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:31.561 Run-time dependency libelf found: YES 0.190 00:01:31.561 Message: lib/bpf: Defining dependency "bpf" 00:01:31.561 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:31.561 Message: lib/compressdev: Defining dependency "compressdev" 00:01:31.561 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:31.561 Message: lib/distributor: Defining dependency "distributor" 00:01:31.561 Message: lib/dmadev: Defining dependency "dmadev" 00:01:31.561 Message: lib/efd: Defining dependency "efd" 00:01:31.561 Message: lib/eventdev: Defining dependency "eventdev" 00:01:31.561 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:31.561 Message: lib/gpudev: Defining dependency "gpudev" 00:01:31.561 Message: lib/gro: Defining dependency "gro" 00:01:31.561 Message: lib/gso: Defining dependency "gso" 00:01:31.561 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:31.561 Message: lib/jobstats: Defining dependency "jobstats" 00:01:31.561 Message: lib/latencystats: Defining dependency "latencystats" 00:01:31.561 Message: lib/lpm: Defining dependency "lpm" 00:01:31.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:31.561 Compiler for C supports arguments -mavx512f -mavx512dq 
-mavx512ifma: YES 00:01:31.561 Message: lib/member: Defining dependency "member" 00:01:31.561 Message: lib/pcapng: Defining dependency "pcapng" 00:01:31.561 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:31.561 Message: lib/power: Defining dependency "power" 00:01:31.561 Message: lib/rawdev: Defining dependency "rawdev" 00:01:31.561 Message: lib/regexdev: Defining dependency "regexdev" 00:01:31.561 Message: lib/mldev: Defining dependency "mldev" 00:01:31.561 Message: lib/rib: Defining dependency "rib" 00:01:31.561 Message: lib/reorder: Defining dependency "reorder" 00:01:31.561 Message: lib/sched: Defining dependency "sched" 00:01:31.561 Message: lib/security: Defining dependency "security" 00:01:31.561 Message: lib/stack: Defining dependency "stack" 00:01:31.561 Has header "linux/userfaultfd.h" : YES 00:01:31.561 Has header "linux/vduse.h" : YES 00:01:31.561 Message: lib/vhost: Defining dependency "vhost" 00:01:31.561 Message: lib/ipsec: Defining dependency "ipsec" 00:01:31.561 Message: lib/pdcp: Defining dependency "pdcp" 00:01:31.561 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:31.561 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:31.561 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:31.561 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:31.561 Message: lib/fib: Defining dependency "fib" 00:01:31.561 Message: lib/port: Defining dependency "port" 00:01:31.561 Message: lib/pdump: Defining dependency "pdump" 00:01:31.561 Message: lib/table: Defining dependency "table" 00:01:31.561 Message: lib/pipeline: Defining dependency "pipeline" 00:01:31.561 Message: lib/graph: Defining dependency "graph" 00:01:31.561 Message: lib/node: Defining dependency "node" 00:01:32.939 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:32.939 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:32.939 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:32.939 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:32.939 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:32.939 Compiler for C supports arguments -Wno-unused-value: YES 00:01:32.939 Compiler for C supports arguments -Wno-format: YES 00:01:32.939 Compiler for C supports arguments -Wno-format-security: YES 00:01:32.939 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:32.939 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:32.939 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:32.939 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:32.939 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:32.939 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:32.939 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:32.939 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:32.939 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:32.939 Has header "sys/epoll.h" : YES 00:01:32.939 Program doxygen found: YES (/usr/bin/doxygen) 00:01:32.939 Configuring doxy-api-html.conf using configuration 00:01:32.939 Configuring doxy-api-man.conf using configuration 00:01:32.939 Program mandb found: YES (/usr/bin/mandb) 00:01:32.939 Program sphinx-build found: NO 00:01:32.939 Configuring rte_build_config.h using configuration 00:01:32.939 Message: 00:01:32.939 ================= 00:01:32.939 Applications Enabled 00:01:32.939 ================= 00:01:32.939 
00:01:32.939 apps:
00:01:32.939 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:01:32.939 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:01:32.939 test-pmd, test-regex, test-sad, test-security-perf,
00:01:32.939 
00:01:32.939 Message:
00:01:32.939 =================
00:01:32.939 Libraries Enabled
00:01:32.939 =================
00:01:32.939 
00:01:32.939 libs:
00:01:32.939 log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu,
00:01:32.939 mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics,
00:01:32.939 hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev,
00:01:32.939 cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro,
00:01:32.939 gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power,
00:01:32.939 rawdev, regexdev, mldev, rib, reorder, sched, security, stack,
00:01:32.939 vhost, ipsec, pdcp, fib, port, pdump, table, pipeline,
00:01:32.939 graph, node,
00:01:32.939 
00:01:32.939 Message:
00:01:32.939 ===============
00:01:32.939 Drivers Enabled
00:01:32.939 ===============
00:01:32.939 
00:01:32.939 common:
00:01:32.939 
00:01:32.939 bus:
00:01:32.939 pci, vdev,
00:01:32.939 mempool:
00:01:32.939 ring,
00:01:32.939 dma:
00:01:32.939 
00:01:32.939 net:
00:01:32.939 i40e,
00:01:32.939 raw:
00:01:32.939 
00:01:32.939 crypto:
00:01:32.939 
00:01:32.939 compress:
00:01:32.939 
00:01:32.939 regex:
00:01:32.939 
00:01:32.939 ml:
00:01:32.939 
00:01:32.939 vdpa:
00:01:32.939 
00:01:32.939 event:
00:01:32.939 
00:01:32.939 baseband:
00:01:32.939 
00:01:32.939 gpu:
00:01:32.939 
00:01:32.939 
00:01:32.939 Message:
00:01:32.939 =================
00:01:32.939 Content Skipped
00:01:32.939 =================
00:01:32.939 
00:01:32.939 apps:
00:01:32.939 
00:01:32.939 libs:
00:01:32.939 
00:01:32.939 drivers:
00:01:32.939 common/cpt: not in enabled drivers build config
00:01:32.939 common/dpaax: not in enabled drivers build config
00:01:32.939 common/iavf: not in enabled drivers build config
00:01:32.939 common/idpf: not in enabled drivers build config
00:01:32.939 common/ionic: not in enabled drivers build config
00:01:32.939 common/mvep: not in enabled drivers build config
00:01:32.939 common/octeontx: not in enabled drivers build config
00:01:32.939 bus/auxiliary: not in enabled drivers build config
00:01:32.939 bus/cdx: not in enabled drivers build config
00:01:32.939 bus/dpaa: not in enabled drivers build config
00:01:32.939 bus/fslmc: not in enabled drivers build config
00:01:32.939 bus/ifpga: not in enabled drivers build config
00:01:32.939 bus/platform: not in enabled drivers build config
00:01:32.939 bus/uacce: not in enabled drivers build config
00:01:32.939 bus/vmbus: not in enabled drivers build config
00:01:32.939 common/cnxk: not in enabled drivers build config
00:01:32.939 common/mlx5: not in enabled drivers build config
00:01:32.939 common/nfp: not in enabled drivers build config
00:01:32.939 common/nitrox: not in enabled drivers build config
00:01:32.939 common/qat: not in enabled drivers build config
00:01:32.939 common/sfc_efx: not in enabled drivers build config
00:01:32.939 mempool/bucket: not in enabled drivers build config
00:01:32.939 mempool/cnxk: not in enabled drivers build config
00:01:32.939 mempool/dpaa: not in enabled drivers build config
00:01:32.939 mempool/dpaa2: not in enabled drivers build config
00:01:32.939 mempool/octeontx: not in enabled drivers build config
00:01:32.939 mempool/stack: not in enabled drivers build config
00:01:32.939 dma/cnxk: not in enabled drivers build config
00:01:32.939 dma/dpaa: not in enabled drivers build config
00:01:32.939 dma/dpaa2: not in enabled drivers build config
00:01:32.939 dma/hisilicon: not in enabled drivers build config
00:01:32.939 dma/idxd: not in enabled drivers build config
00:01:32.939 dma/ioat: not in enabled drivers build config
00:01:32.939 dma/odm: not in enabled drivers build config
00:01:32.939 dma/skeleton: not in enabled drivers build config
00:01:32.939 net/af_packet: not in enabled drivers build config
00:01:32.939 net/af_xdp: not in enabled drivers build config
00:01:32.939 net/ark: not in enabled drivers build config
00:01:32.939 net/atlantic: not in enabled drivers build config
00:01:32.939 net/avp: not in enabled drivers build config
00:01:32.939 net/axgbe: not in enabled drivers build config
00:01:32.939 net/bnx2x: not in enabled drivers build config
00:01:32.939 net/bnxt: not in enabled drivers build config
00:01:32.939 net/bonding: not in enabled drivers build config
00:01:32.939 net/cnxk: not in enabled drivers build config
00:01:32.939 net/cpfl: not in enabled drivers build config
00:01:32.939 net/cxgbe: not in enabled drivers build config
00:01:32.939 net/dpaa: not in enabled drivers build config
00:01:32.939 net/dpaa2: not in enabled drivers build config
00:01:32.939 net/e1000: not in enabled drivers build config
00:01:32.939 net/ena: not in enabled drivers build config
00:01:32.939 net/enetc: not in enabled drivers build config
00:01:32.939 net/enetfec: not in enabled drivers build config
00:01:32.939 net/enic: not in enabled drivers build config
00:01:32.939 net/failsafe: not in enabled drivers build config
00:01:32.939 net/fm10k: not in enabled drivers build config
00:01:32.939 net/gve: not in enabled drivers build config
00:01:32.939 net/hinic: not in enabled drivers build config
00:01:32.939 net/hns3: not in enabled drivers build config
00:01:32.939 net/iavf: not in enabled drivers build config
00:01:32.939 net/ice: not in enabled drivers build config
00:01:32.939 net/idpf: not in enabled drivers build config
00:01:32.939 net/igc: not in enabled drivers build config
00:01:32.939 net/ionic: not in enabled drivers build config
00:01:32.939 net/ipn3ke: not in enabled drivers build config
00:01:32.940 net/ixgbe: not in enabled drivers build config
00:01:32.940 net/mana: not in enabled drivers build config
00:01:32.940 net/memif: not in enabled drivers build config
00:01:32.940 net/mlx4: not in enabled drivers build config
00:01:32.940 net/mlx5: not in enabled drivers build config
00:01:32.940 net/mvneta: not in enabled drivers build config
00:01:32.940 net/mvpp2: not in enabled drivers build config
00:01:32.940 net/netvsc: not in enabled drivers build config
00:01:32.940 net/nfb: not in enabled drivers build config
00:01:32.940 net/nfp: not in enabled drivers build config
00:01:32.940 net/ngbe: not in enabled drivers build config
00:01:32.940 net/ntnic: not in enabled drivers build config
00:01:32.940 net/null: not in enabled drivers build config
00:01:32.940 net/octeontx: not in enabled drivers build config
00:01:32.940 net/octeon_ep: not in enabled drivers build config
00:01:32.940 net/pcap: not in enabled drivers build config
00:01:32.940 net/pfe: not in enabled drivers build config
00:01:32.940 net/qede: not in enabled drivers build config
00:01:32.940 net/ring: not in enabled drivers build config
00:01:32.940 net/sfc: not in enabled drivers build config
00:01:32.940 net/softnic: not in enabled drivers build config
00:01:32.940 net/tap: not in enabled drivers build config
00:01:32.940 net/thunderx: not in enabled drivers build config
00:01:32.940 net/txgbe: not in enabled drivers build config
00:01:32.940 net/vdev_netvsc: not in enabled drivers build config
00:01:32.940 net/vhost: not in enabled drivers build config
00:01:32.940 net/virtio: not in enabled drivers build config
00:01:32.940 net/vmxnet3: not in enabled drivers build config
00:01:32.940 raw/cnxk_bphy: not in enabled drivers build config
00:01:32.940 raw/cnxk_gpio: not in enabled drivers build config
00:01:32.940 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:32.940 raw/ifpga: not in enabled drivers build config
00:01:32.940 raw/ntb: not in enabled drivers build config
00:01:32.940 raw/skeleton: not in enabled drivers build config
00:01:32.940 crypto/armv8: not in enabled drivers build config
00:01:32.940 crypto/bcmfs: not in enabled drivers build config
00:01:32.940 crypto/caam_jr: not in enabled drivers build config
00:01:32.940 crypto/ccp: not in enabled drivers build config
00:01:32.940 crypto/cnxk: not in enabled drivers build config
00:01:32.940 crypto/dpaa_sec: not in enabled drivers build config
00:01:32.940 crypto/dpaa2_sec: not in enabled drivers build config
00:01:32.940 crypto/ionic: not in enabled drivers build config
00:01:32.940 crypto/ipsec_mb: not in enabled drivers build config
00:01:32.940 crypto/mlx5: not in enabled drivers build config
00:01:32.940 crypto/mvsam: not in enabled drivers build config
00:01:32.940 crypto/nitrox: not in enabled drivers build config
00:01:32.940 crypto/null: not in enabled drivers build config
00:01:32.940 crypto/octeontx: not in enabled drivers build config
00:01:32.940 crypto/openssl: not in enabled drivers build config
00:01:32.940 crypto/scheduler: not in enabled drivers build config
00:01:32.940 crypto/uadk: not in enabled drivers build config
00:01:32.940 crypto/virtio: not in enabled drivers build config
00:01:32.940 compress/isal: not in enabled drivers build config
00:01:32.940 compress/mlx5: not in enabled drivers build config
00:01:32.940 compress/nitrox: not in enabled drivers build config
00:01:32.940 compress/octeontx: not in enabled drivers build config
00:01:32.940 compress/uadk: not in enabled drivers build config
00:01:32.940 compress/zlib: not in enabled drivers build config
00:01:32.940 regex/mlx5: not in enabled drivers build config
00:01:32.940 regex/cn9k: not in enabled drivers build config
00:01:32.940 ml/cnxk: not in enabled drivers build config
00:01:32.940 vdpa/ifc: not in enabled drivers build config
00:01:32.940 vdpa/mlx5: not in enabled drivers build config
00:01:32.940 vdpa/nfp: not in enabled drivers build config
00:01:32.940 vdpa/sfc: not in enabled drivers build config
00:01:32.940 event/cnxk: not in enabled drivers build config
00:01:32.940 event/dlb2: not in enabled drivers build config
00:01:32.940 event/dpaa: not in enabled drivers build config
00:01:32.940 event/dpaa2: not in enabled drivers build config
00:01:32.940 event/dsw: not in enabled drivers build config
00:01:32.940 event/opdl: not in enabled drivers build config
00:01:32.940 event/skeleton: not in enabled drivers build config
00:01:32.940 event/sw: not in enabled drivers build config
00:01:32.940 event/octeontx: not in enabled drivers build config
00:01:32.940 baseband/acc: not in enabled drivers build config
00:01:32.940 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:32.940 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:32.940 baseband/la12xx: not in enabled drivers build config
00:01:32.940 baseband/null: not in enabled drivers build config
00:01:32.940 baseband/turbo_sw: not in enabled drivers build config
00:01:32.940 gpu/cuda: not in enabled drivers build config
00:01:32.940 
00:01:32.940 
00:01:32.940 Build targets in project: 224
00:01:32.940 
00:01:32.940 DPDK 24.07.0-rc3
00:01:32.940 
00:01:32.940 User defined options
00:01:32.940 libdir : lib
00:01:32.940 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:32.940 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:32.940 c_link_args :
00:01:32.940 enable_docs : false
00:01:32.940 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:32.940 enable_kmods : false
00:01:32.940 machine : native
00:01:32.940 tests : false
00:01:32.940 
00:01:32.940 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:32.940 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:01:32.940 08:47:10 build_native_dpdk -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:32.940 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:33.202 [1/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:33.202 [2/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:33.202 [3/723] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:33.202 [4/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:33.202 [5/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:33.202 [6/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:33.202 [7/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:33.202 [8/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:33.202 [9/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:33.202 [10/723] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:33.202 [11/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:33.202 [12/723] Linking static target lib/librte_kvargs.a
00:01:33.461 [13/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:33.461 [14/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:33.461 [15/723] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:33.461 [16/723] Linking static target lib/librte_log.a
00:01:33.726 [17/723] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:01:33.726 [18/723] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.726 [19/723] Linking static target lib/librte_argparse.a
00:01:33.990 [20/723] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.258 [21/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:34.258 [22/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:34.258 [23/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:34.258 [24/723] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.258 [25/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:34.258 [26/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
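The "User defined options" block above pins down how this DPDK tree was configured: a native-machine build installing into the workspace-local prefix, with docs, kernel modules, and unit tests disabled, and only the PCI/vdev buses, the ring mempool, and the i40e net driver enabled. As a rough sketch only (option spellings are taken from the logged summary; the harness's exact invocation may differ), the same configuration can be expressed with the non-deprecated `meson setup` form that the WARNING above asks for:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false
    ninja -C build-tmp -j48    # the build step logged just below

Once installed to that prefix, consumers would normally locate the result through pkg-config, e.g. `PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig pkg-config --cflags --libs libdpdk` (assuming the standard libdpdk.pc file a DPDK install generates).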
00:01:34.258 [27/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:34.258 [28/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:34.258 [29/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:34.258 [30/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:34.258 [31/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:34.258 [32/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:34.258 [33/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:34.258 [34/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:34.258 [35/723] Linking target lib/librte_log.so.24.2
00:01:34.258 [36/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:34.258 [37/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:34.258 [38/723] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:34.258 [39/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:34.258 [40/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:34.258 [41/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:34.258 [42/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:34.258 [43/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:34.258 [44/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:34.258 [45/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:34.258 [46/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:34.258 [47/723] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:34.521 [48/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:34.521 [49/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:34.521 [50/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:34.521 [51/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:34.521 [52/723] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:34.521 [53/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:34.521 [54/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:34.521 [55/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:34.521 [56/723] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols
00:01:34.521 [57/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:34.521 [58/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:34.521 [59/723] Linking target lib/librte_kvargs.so.24.2
00:01:34.521 [60/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:34.521 [61/723] Linking target lib/librte_argparse.so.24.2
00:01:34.521 [62/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:34.781 [63/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:34.781 [64/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:34.781 [65/723] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols
00:01:34.781 [66/723] Compiling C object
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:35.054 [67/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:35.054 [68/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:35.054 [69/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:35.054 [70/723] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:35.054 [71/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:35.054 [72/723] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:35.321 [73/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:35.321 [74/723] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:35.321 [75/723] Linking static target lib/librte_pci.a 00:01:35.321 [76/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:35.321 [77/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:35.321 [78/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:35.321 [79/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:35.321 [80/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:35.321 [81/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o 00:01:35.588 [82/723] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:35.588 [83/723] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:35.588 [84/723] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:35.588 [85/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:35.588 [86/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:35.588 [87/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:35.588 [88/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:35.588 [89/723] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:35.588 [90/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:35.588 [91/723] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:35.588 [92/723] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:35.588 [93/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:35.588 [94/723] Linking static target lib/librte_ring.a 00:01:35.588 [95/723] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:35.588 [96/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:35.588 [97/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:35.588 [98/723] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.588 [99/723] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:35.588 [100/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:35.588 [101/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:35.588 [102/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:35.588 [103/723] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:35.588 [104/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:35.588 [105/723] Linking static target lib/librte_meter.a 00:01:35.588 [106/723] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:35.588 [107/723] Compiling C 
object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:35.848 [108/723] Linking static target lib/librte_telemetry.a 00:01:35.848 [109/723] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:35.848 [110/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:35.848 [111/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:35.848 [112/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:35.848 [113/723] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:35.848 [114/723] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:35.848 [115/723] Linking static target lib/librte_net.a 00:01:36.113 [116/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:36.113 [117/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:36.113 [118/723] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.113 [119/723] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.113 [120/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:36.113 [121/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:36.113 [122/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:36.113 [123/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:36.113 [124/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:36.113 [125/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:36.377 [126/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:36.377 [127/723] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.377 [128/723] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.377 [129/723] Linking target lib/librte_telemetry.so.24.2 00:01:36.377 [130/723] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:36.377 [131/723] Linking static target lib/librte_mempool.a 00:01:36.636 [132/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:36.636 [133/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:36.636 [134/723] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:36.636 [135/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:36.636 [136/723] Linking static target lib/librte_eal.a 00:01:36.636 [137/723] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:36.636 [138/723] Linking static target lib/librte_cmdline.a 00:01:36.636 [139/723] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols 00:01:36.636 [140/723] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:36.636 [141/723] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:36.636 [142/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:36.636 [143/723] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:36.899 [144/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:36.899 [145/723] Linking static target lib/librte_cfgfile.a 00:01:36.899 [146/723] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:36.899 [147/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:36.899 [148/723] 
Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:36.899 [149/723] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:36.899 [150/723] Linking static target lib/librte_metrics.a 00:01:36.899 [151/723] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:36.899 [152/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:37.166 [153/723] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:37.166 [154/723] Linking static target lib/librte_rcu.a 00:01:37.166 [155/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:37.166 [156/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:37.166 [157/723] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:37.166 [158/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:37.166 [159/723] Linking static target lib/librte_bitratestats.a 00:01:37.166 [160/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:37.430 [161/723] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.430 [162/723] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:37.430 [163/723] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:37.430 [164/723] Linking static target lib/librte_mbuf.a 00:01:37.430 [165/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:37.430 [166/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:37.430 [167/723] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.430 [168/723] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:37.430 [169/723] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.430 [170/723] Linking static target lib/librte_timer.a 00:01:37.430 [171/723] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.430 [172/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:37.430 [173/723] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.694 [174/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:37.694 [175/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:37.694 [176/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:37.694 [177/723] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:37.694 [178/723] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:37.694 [179/723] Linking static target lib/librte_bbdev.a 00:01:37.694 [180/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:37.958 [181/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:37.958 [182/723] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:37.958 [183/723] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.958 [184/723] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:37.958 [185/723] Linking static target lib/librte_compressdev.a 00:01:37.958 [186/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:37.958 [187/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:37.958 [188/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:37.958 [189/723] 
Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.218 [190/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:38.218 [191/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:38.218 [192/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:38.218 [193/723] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.488 [194/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:38.753 [195/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:38.753 [196/723] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:38.753 [197/723] Linking static target lib/librte_dmadev.a 00:01:38.753 [198/723] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:38.753 [199/723] Linking static target lib/librte_distributor.a 00:01:38.753 [200/723] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.753 [201/723] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:38.753 [202/723] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.753 [203/723] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:38.753 [204/723] Linking static target lib/librte_bpf.a 00:01:39.017 [205/723] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:39.017 [206/723] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:39.017 [207/723] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:39.017 [208/723] Linking static target lib/librte_dispatcher.a 00:01:39.017 [209/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:39.017 [210/723] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:39.017 [211/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:39.017 [212/723] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:39.017 [213/723] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:39.017 [214/723] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:39.281 [215/723] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.281 [216/723] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:39.281 [217/723] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:39.281 [218/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:39.281 [219/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:39.281 [220/723] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:39.281 [221/723] Linking static target lib/librte_gpudev.a 00:01:39.281 [222/723] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:39.281 [223/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:39.281 [224/723] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:39.281 [225/723] Linking static target lib/librte_gro.a 00:01:39.281 [226/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:39.281 [227/723] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:39.281 [228/723] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.281 [229/723] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:39.281 [230/723] Linking static target lib/librte_jobstats.a 00:01:39.281 [231/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:39.541 [232/723] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:39.541 [233/723] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.541 [234/723] Linking static target lib/librte_gso.a 00:01:39.541 [235/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:39.541 [236/723] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.541 [237/723] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:39.806 [238/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:39.806 [239/723] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:39.806 [240/723] Linking static target lib/librte_latencystats.a 00:01:39.806 [241/723] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.806 [242/723] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.806 [243/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:39.806 [244/723] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:39.806 [245/723] Linking static target lib/librte_ip_frag.a 00:01:39.806 [246/723] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.066 [247/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:40.066 [248/723] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:40.066 [249/723] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:40.066 [250/723] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:40.066 [251/723] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:40.066 [252/723] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:40.066 [253/723] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:40.066 [254/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:40.066 [255/723] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.066 [256/723] Linking static target lib/librte_efd.a 00:01:40.066 [257/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:40.327 [258/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:40.327 [259/723] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.327 [260/723] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:40.327 [261/723] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:40.327 [262/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:40.592 [263/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:40.592 [264/723] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:40.592 [265/723] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.592 [266/723] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.592 [267/723] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:40.592 [268/723] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:40.592 [269/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:40.592 [270/723] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:40.852 [271/723] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:40.852 [272/723] Linking static target lib/librte_regexdev.a 00:01:40.852 [273/723] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:40.852 [274/723] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:40.852 [275/723] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:40.852 [276/723] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:40.852 [277/723] Linking static target lib/librte_rawdev.a 00:01:40.852 [278/723] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:40.852 [279/723] Linking static target lib/librte_pcapng.a 00:01:40.852 [280/723] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:40.852 [281/723] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:40.852 [282/723] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:40.852 [283/723] Linking static target lib/librte_lpm.a 00:01:41.117 [284/723] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:41.117 [285/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:41.117 [286/723] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:41.117 [287/723] Linking static target lib/librte_mldev.a 00:01:41.117 [288/723] Linking static target lib/librte_power.a 00:01:41.117 [289/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:41.117 [290/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:41.117 [291/723] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:41.117 [292/723] Linking static target lib/librte_stack.a 00:01:41.117 [293/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:41.379 [294/723] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.379 [295/723] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:41.379 [296/723] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:41.379 [297/723] Compiling C object lib/librte_port.a.p/port_port_log.c.o 00:01:41.379 [298/723] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:41.379 [299/723] Linking static target lib/librte_reorder.a 00:01:41.379 [300/723] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.379 [301/723] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:41.642 [302/723] Linking static target lib/acl/libavx2_tmp.a 00:01:41.642 [303/723] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:41.642 [304/723] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.642 [305/723] Linking static target lib/librte_security.a 00:01:41.642 [306/723] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:41.642 [307/723] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:41.642 [308/723] Linking static target lib/librte_cryptodev.a 00:01:41.642 [309/723] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:41.642 
[310/723] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.642 [311/723] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:41.642 [312/723] Linking static target lib/librte_hash.a 00:01:41.906 [313/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:41.906 [314/723] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:41.906 [315/723] Linking static target lib/acl/libavx512_tmp.a 00:01:41.906 [316/723] Linking static target lib/librte_acl.a 00:01:41.906 [317/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:41.906 [318/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:41.906 [319/723] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.906 [320/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:41.906 [321/723] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:41.906 [322/723] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.906 [323/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:41.906 [324/723] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:41.906 [325/723] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:41.906 [326/723] Linking static target lib/librte_rib.a 00:01:41.906 [327/723] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.173 [328/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:42.173 [329/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:42.173 [330/723] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:42.174 [331/723] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:42.174 [332/723] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:42.174 [333/723] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.174 [334/723] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:42.174 [335/723] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:42.174 [336/723] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:42.434 [337/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:42.434 [338/723] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:42.434 [339/723] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.434 [340/723] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:42.697 [341/723] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.697 [342/723] Compiling C object lib/librte_table.a.p/table_table_log.c.o 00:01:42.697 [343/723] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.961 [344/723] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:43.221 [345/723] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:43.221 [346/723] Linking static target lib/librte_eventdev.a 00:01:43.221 [347/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:43.221 [348/723] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:43.221 [349/723] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:43.221 [350/723] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:43.221 
[351/723] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.221 [352/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:43.221 [353/723] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:43.483 [354/723] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:43.483 [355/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:43.483 [356/723] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:43.483 [357/723] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:43.483 [358/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:43.483 [359/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:43.483 [360/723] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:43.483 [361/723] Linking static target lib/librte_sched.a 00:01:43.483 [362/723] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:43.483 [363/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:43.483 [364/723] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:43.483 [365/723] Linking static target lib/librte_member.a 00:01:43.748 [366/723] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:43.748 [367/723] Linking static target lib/librte_fib.a 00:01:43.748 [368/723] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.748 [369/723] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:43.748 [370/723] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:43.748 [371/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:43.748 [372/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:43.748 [373/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:43.748 [374/723] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:43.748 [375/723] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:43.748 [376/723] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:43.748 [377/723] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:43.748 [378/723] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:44.011 [379/723] Linking static target lib/librte_ethdev.a 00:01:44.011 [380/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:44.011 [381/723] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:44.011 [382/723] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.011 [383/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:44.011 [384/723] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:44.272 [385/723] Linking static target lib/librte_ipsec.a 00:01:44.272 [386/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:44.272 [387/723] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.273 [388/723] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.273 [389/723] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:44.273 [390/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:44.536 [391/723] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:44.536 [392/723] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:44.536 [393/723] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:44.536 [394/723] Linking static target lib/librte_pdump.a 00:01:44.536 [395/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:44.536 [396/723] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:44.804 [397/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:44.804 [398/723] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:44.804 [399/723] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.804 [400/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:44.804 [401/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:44.804 [402/723] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:44.804 [403/723] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:44.804 [404/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:44.804 [405/723] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:44.804 [406/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:45.068 [407/723] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:45.068 [408/723] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:45.068 [409/723] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:45.068 [410/723] Linking static target lib/librte_pdcp.a 00:01:45.068 [411/723] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:45.068 [412/723] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.068 [413/723] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:45.068 [414/723] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:45.068 [415/723] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:45.068 [416/723] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:45.336 [417/723] Linking static target lib/librte_table.a 00:01:45.336 [418/723] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:45.336 [419/723] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:45.336 [420/723] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:45.596 [421/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:45.596 [422/723] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.596 [423/723] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:45.596 [424/723] Linking static target lib/librte_graph.a 00:01:45.861 [425/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:45.861 [426/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:45.861 [427/723] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:45.861 [428/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:45.861 [429/723] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output) 00:01:45.861 [430/723] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:45.861 [431/723] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 
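The recurring "Generating lib/<name>.sym_chk with a custom command" steps above are DPDK's per-library symbol check: the symbols a freshly built library actually exports are compared against what its version map declares, so an accidental new export fails the build. Purely as a hypothetical illustration of that kind of check (this is not DPDK's actual script, and expected_symbols.txt is an invented file):

    # List what the archive really exports, then diff against the declared list.
    nm -g --defined-only build-tmp/lib/librte_kvargs.a | awk 'NF >= 3 {print $NF}' | sort -u > exported.txt
    sort -u expected_symbols.txt > expected.txt
    comm -3 expected.txt exported.txt    # any output means the map and the library disagree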
00:01:45.861 [432/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:45.861 [433/723] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:46.124 [434/723] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:46.124 [435/723] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:46.124 [436/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:46.124 [437/723] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:46.124 [438/723] Linking static target lib/librte_port.a 00:01:46.124 [439/723] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:46.390 [440/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:46.390 [441/723] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.390 [442/723] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:46.390 [443/723] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:46.390 [444/723] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:46.390 [445/723] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.390 [446/723] Linking static target drivers/librte_bus_vdev.a 00:01:46.390 [447/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:46.652 [448/723] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.652 [449/723] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:46.652 [450/723] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.652 [451/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:46.652 [452/723] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:46.652 [453/723] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.652 [454/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:46.652 [455/723] Linking static target drivers/librte_bus_pci.a 00:01:46.652 [456/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:46.916 [457/723] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:46.916 [458/723] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:46.916 [459/723] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:46.916 [460/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:46.916 [461/723] Linking static target lib/librte_node.a 00:01:46.916 [462/723] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.916 [463/723] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.916 [464/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:46.916 [465/723] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:46.916 [466/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:46.916 [467/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:47.178 [468/723] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:47.178 [469/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:47.178 [470/723] 
Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:47.178 [471/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:47.178 [472/723] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:47.179 [473/723] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:47.179 [474/723] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:47.179 [475/723] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:47.444 [476/723] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:47.444 [477/723] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:01:47.444 [478/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:47.444 [479/723] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.444 [480/723] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.444 [481/723] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:47.444 [482/723] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:47.703 [483/723] Linking target lib/librte_eal.so.24.2 00:01:47.703 [484/723] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:47.703 [485/723] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:47.703 [486/723] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:47.703 [487/723] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:47.703 [488/723] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.703 [489/723] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o 00:01:47.703 [490/723] Linking static target drivers/librte_mempool_ring.a 00:01:47.703 [491/723] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:47.703 [492/723] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:47.703 [493/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:47.703 [494/723] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:47.993 [495/723] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols 00:01:47.993 [496/723] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:47.993 [497/723] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:01:47.993 [498/723] Linking target lib/librte_ring.so.24.2 00:01:47.993 [499/723] Linking target lib/librte_meter.so.24.2 00:01:47.993 [500/723] Linking target lib/librte_pci.so.24.2 00:01:47.993 [501/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:47.993 [502/723] Linking target lib/librte_timer.so.24.2 00:01:47.993 [503/723] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:48.270 [504/723] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols 00:01:48.270 [505/723] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols 00:01:48.270 [506/723] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols 00:01:48.270 [507/723] Linking target lib/librte_rcu.so.24.2 00:01:48.270 [508/723] Linking target lib/librte_acl.so.24.2 00:01:48.270 [509/723] Linking target lib/librte_mempool.so.24.2 00:01:48.270 [510/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:48.270 [511/723] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:48.270 [512/723] Linking 
target lib/librte_cfgfile.so.24.2 00:01:48.270 [513/723] Linking target lib/librte_dmadev.so.24.2 00:01:48.270 [514/723] Linking target lib/librte_jobstats.so.24.2 00:01:48.270 [515/723] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols 00:01:48.270 [516/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:48.270 [517/723] Linking target lib/librte_rawdev.so.24.2 00:01:48.544 [518/723] Linking target lib/librte_stack.so.24.2 00:01:48.544 [519/723] Linking target drivers/librte_bus_pci.so.24.2 00:01:48.544 [520/723] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols 00:01:48.544 [521/723] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols 00:01:48.544 [522/723] Linking target drivers/librte_bus_vdev.so.24.2 00:01:48.544 [523/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:48.544 [524/723] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols 00:01:48.544 [525/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:48.544 [526/723] Linking target lib/librte_mbuf.so.24.2 00:01:48.544 [527/723] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols 00:01:48.544 [528/723] Linking target lib/librte_rib.so.24.2 00:01:48.544 [529/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:48.544 [530/723] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:48.544 [531/723] Linking target drivers/librte_mempool_ring.so.24.2 00:01:48.544 [532/723] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols 00:01:48.544 [533/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:48.807 [534/723] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols 00:01:48.807 [535/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:48.807 [536/723] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols 00:01:48.807 [537/723] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:48.807 [538/723] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols 00:01:48.807 [539/723] Linking target lib/librte_net.so.24.2 00:01:48.807 [540/723] Linking target lib/librte_bbdev.so.24.2 00:01:48.807 [541/723] Linking target lib/librte_compressdev.so.24.2 00:01:49.070 [542/723] Linking target lib/librte_distributor.so.24.2 00:01:49.070 [543/723] Linking target lib/librte_cryptodev.so.24.2 00:01:49.070 [544/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:49.070 [545/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:49.070 [546/723] Linking target lib/librte_gpudev.so.24.2 00:01:49.070 [547/723] Linking target lib/librte_regexdev.so.24.2 00:01:49.070 [548/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:49.070 [549/723] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols 00:01:49.070 [550/723] Linking target lib/librte_mldev.so.24.2 00:01:49.070 [551/723] Linking target lib/librte_reorder.so.24.2 00:01:49.070 [552/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:49.070 [553/723] Linking static target 
drivers/net/i40e/libi40e_avx512_lib.a 00:01:49.070 [554/723] Linking target lib/librte_fib.so.24.2 00:01:49.070 [555/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:49.070 [556/723] Linking target lib/librte_sched.so.24.2 00:01:49.070 [557/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:49.333 [558/723] Linking target lib/librte_cmdline.so.24.2 00:01:49.333 [559/723] Linking target lib/librte_hash.so.24.2 00:01:49.333 [560/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:49.333 [561/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:49.333 [562/723] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols 00:01:49.333 [563/723] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:49.333 [564/723] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:49.333 [565/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:49.333 [566/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:49.333 [567/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:49.333 [568/723] Linking target lib/librte_security.so.24.2 00:01:49.333 [569/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:49.333 [570/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:49.333 [571/723] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols 00:01:49.333 [572/723] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols 00:01:49.333 [573/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:49.333 [574/723] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:49.333 [575/723] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols 00:01:49.333 [576/723] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:49.333 [577/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:49.596 [578/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:49.596 [579/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:49.596 [580/723] Linking target lib/librte_efd.so.24.2 00:01:49.596 [581/723] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:49.596 [582/723] Linking target lib/librte_lpm.so.24.2 00:01:49.596 [583/723] Linking target lib/librte_member.so.24.2 00:01:49.596 [584/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:49.596 [585/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:49.596 [586/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:49.596 [587/723] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols 00:01:49.596 [588/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:49.596 [589/723] Linking target lib/librte_ipsec.so.24.2 00:01:49.596 [590/723] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:49.596 [591/723] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:49.857 [592/723] Linking target lib/librte_pdcp.so.24.2 00:01:49.857 [593/723] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols 00:01:49.857 [594/723] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols 00:01:50.120 [595/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:50.120 [596/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:50.120 [597/723] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:50.384 [598/723] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:50.384 [599/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:50.384 [600/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:50.384 [601/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:50.384 [602/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:50.384 [603/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:50.384 [604/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:50.646 [605/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:50.646 [606/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:50.646 [607/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:50.646 [608/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:50.646 [609/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:50.646 [610/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:50.904 [611/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:50.904 [612/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:50.904 [613/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:50.904 [614/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:50.904 [615/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:50.904 [616/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:50.904 [617/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:50.904 [618/723] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:51.163 [619/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:51.163 [620/723] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:51.163 [621/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:51.163 [622/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:51.163 [623/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:51.422 [624/723] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:51.422 [625/723] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:51.681 [626/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:51.681 [627/723] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:51.681 [628/723] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:51.681 [629/723] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:51.681 [630/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:51.681 [631/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:51.681 [632/723] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:51.681 [633/723] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:51.939 [634/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:51.939 [635/723] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:51.939 [636/723] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:51.939 [637/723] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.939 [638/723] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:51.939 [639/723] Linking target lib/librte_ethdev.so.24.2 00:01:51.939 [640/723] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:51.939 [641/723] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:51.939 [642/723] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:52.198 [643/723] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols 00:01:52.198 [644/723] Linking target lib/librte_gso.so.24.2 00:01:52.198 [645/723] Linking target lib/librte_gro.so.24.2 00:01:52.198 [646/723] Linking target lib/librte_metrics.so.24.2 00:01:52.198 [647/723] Linking target lib/librte_eventdev.so.24.2 00:01:52.198 [648/723] Linking target lib/librte_pcapng.so.24.2 00:01:52.198 [649/723] Linking target lib/librte_ip_frag.so.24.2 00:01:52.198 [650/723] Linking target lib/librte_bpf.so.24.2 00:01:52.198 [651/723] Linking target lib/librte_power.so.24.2 00:01:52.198 [652/723] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o 00:01:52.198 [653/723] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols 00:01:52.198 [654/723] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols 00:01:52.198 [655/723] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols 00:01:52.198 [656/723] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols 00:01:52.198 [657/723] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:52.456 [658/723] Linking target lib/librte_latencystats.so.24.2 00:01:52.456 [659/723] Linking target lib/librte_bitratestats.so.24.2 00:01:52.456 [660/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:52.456 [661/723] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols 00:01:52.456 [662/723] Linking target lib/librte_dispatcher.so.24.2 00:01:52.456 [663/723] Linking target lib/librte_pdump.so.24.2 00:01:52.456 [664/723] Linking target lib/librte_graph.so.24.2 00:01:52.456 [665/723] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:52.456 [666/723] Linking target lib/librte_port.so.24.2 00:01:52.456 [667/723] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:52.456 [668/723] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols 00:01:52.456 [669/723] Linking target lib/librte_node.so.24.2 00:01:52.456 [670/723] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:52.456 [671/723] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols 00:01:52.456 
[672/723] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:52.715 [673/723] Linking target lib/librte_table.so.24.2 00:01:52.715 [674/723] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:52.715 [675/723] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:52.715 [676/723] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols 00:01:53.281 [677/723] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:53.281 [678/723] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:53.281 [679/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:53.539 [680/723] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:53.797 [681/723] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:53.797 [682/723] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:53.797 [683/723] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:54.055 [684/723] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:54.055 [685/723] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:54.055 [686/723] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:54.055 [687/723] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.055 [688/723] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:54.312 [689/723] Linking static target drivers/librte_net_i40e.a 00:01:54.312 [690/723] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:54.878 [691/723] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.878 [692/723] Linking target drivers/librte_net_i40e.so.24.2 00:01:55.443 [693/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:55.443 [694/723] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:56.376 [695/723] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:04.488 [696/723] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:04.488 [697/723] Linking static target lib/librte_pipeline.a 00:02:04.488 [698/723] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:04.488 [699/723] Linking static target lib/librte_vhost.a 00:02:04.488 [700/723] Linking target app/dpdk-test-fib 00:02:04.488 [701/723] Linking target app/dpdk-pdump 00:02:04.488 [702/723] Linking target app/dpdk-test-flow-perf 00:02:04.488 [703/723] Linking target app/dpdk-test-gpudev 00:02:04.488 [704/723] Linking target app/dpdk-test-cmdline 00:02:04.488 [705/723] Linking target app/dpdk-dumpcap 00:02:04.488 [706/723] Linking target app/dpdk-test-crypto-perf 00:02:04.488 [707/723] Linking target app/dpdk-test-bbdev 00:02:04.488 [708/723] Linking target app/dpdk-test-mldev 00:02:04.488 [709/723] Linking target app/dpdk-test-regex 00:02:04.488 [710/723] Linking target app/dpdk-proc-info 00:02:04.488 [711/723] Linking target app/dpdk-test-acl 00:02:04.488 [712/723] Linking target app/dpdk-test-dma-perf 00:02:04.488 [713/723] Linking target app/dpdk-test-pipeline 00:02:04.488 [714/723] Linking target app/dpdk-test-sad 00:02:04.488 [715/723] Linking target app/dpdk-test-security-perf 00:02:04.488 [716/723] Linking target app/dpdk-test-compress-perf 00:02:04.488 [717/723] Linking target app/dpdk-test-eventdev 00:02:04.488 [718/723] 
Linking target app/dpdk-graph 00:02:04.746 [719/723] Linking target app/dpdk-testpmd 00:02:05.005 [720/723] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.005 [721/723] Linking target lib/librte_vhost.so.24.2 00:02:06.381 [722/723] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.381 [723/723] Linking target lib/librte_pipeline.so.24.2 00:02:06.381 08:47:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:06.381 08:47:44 build_native_dpdk -- common/autobuild_common.sh@191 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:06.381 08:47:44 build_native_dpdk -- common/autobuild_common.sh@204 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:06.381 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:06.381 [0/1] Installing files. 00:02:06.381 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/memory.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/cpu.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/telemetry-endpoints/counters.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/telemetry-endpoints 00:02:06.381 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.381 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.382 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.383 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:06.645 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.645 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.646 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.647 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.647 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.647 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.647 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:06.648 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:06.648 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_argparse.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_ring.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.648 Installing lib/librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:06.649 Installing lib/librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.649 Installing lib/librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 
00:02:06.912 Installing lib/librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.912 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_table.so.24.2 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing lib/librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing drivers/librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:06.913 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing drivers/librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:06.913 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing drivers/librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:06.913 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:06.913 Installing drivers/librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2 00:02:06.913 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 
Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/argparse/rte_argparse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.913 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.914 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ptr_compress/rte_ptr_compress.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.915 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.916 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.917 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.918 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry-exporter.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.919 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:06.919 Installing symlink pointing to librte_log.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:06.919 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:06.919 Installing symlink pointing to librte_kvargs.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:06.919 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:06.919 Installing symlink pointing to librte_argparse.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so.24 00:02:06.919 Installing symlink pointing to librte_argparse.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_argparse.so 00:02:06.919 Installing symlink pointing to librte_telemetry.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:06.919 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:06.919 Installing symlink pointing to librte_eal.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:06.919 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:06.919 Installing symlink pointing to librte_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 
00:02:06.919 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:06.919 Installing symlink pointing to librte_rcu.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:06.919 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:06.919 Installing symlink pointing to librte_mempool.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:06.919 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:06.919 Installing symlink pointing to librte_mbuf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:06.919 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:06.919 Installing symlink pointing to librte_net.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:06.919 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:06.919 Installing symlink pointing to librte_meter.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:06.919 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:06.919 Installing symlink pointing to librte_ethdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:06.919 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:06.919 Installing symlink pointing to librte_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:06.919 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:06.919 Installing symlink pointing to librte_cmdline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:06.919 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:06.919 Installing symlink pointing to librte_metrics.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:06.919 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:06.919 Installing symlink pointing to librte_hash.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:06.919 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:06.919 Installing symlink pointing to librte_timer.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:06.919 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:06.919 Installing symlink pointing to librte_acl.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:06.919 Installing symlink pointing to librte_acl.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:06.919 Installing symlink pointing to librte_bbdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:06.919 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:06.919 Installing symlink pointing to librte_bitratestats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:06.919 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:06.919 Installing symlink pointing to librte_bpf.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:06.919 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:06.919 Installing symlink pointing to librte_cfgfile.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:06.920 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:06.920 Installing symlink pointing to librte_compressdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:06.920 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:06.920 Installing symlink pointing to librte_cryptodev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:06.920 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:06.920 Installing symlink pointing to librte_distributor.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:06.920 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:06.920 Installing symlink pointing to librte_dmadev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:06.920 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:06.920 Installing symlink pointing to librte_efd.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:06.920 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:06.920 Installing symlink pointing to librte_eventdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:06.920 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:06.920 Installing symlink pointing to librte_dispatcher.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:06.920 Installing symlink pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:06.920 Installing symlink pointing to librte_gpudev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:06.920 Installing symlink 
pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:06.920 Installing symlink pointing to librte_gro.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:06.920 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:06.920 Installing symlink pointing to librte_gso.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:06.920 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:06.920 Installing symlink pointing to librte_ip_frag.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:06.920 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:06.920 Installing symlink pointing to librte_jobstats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:06.920 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:06.920 Installing symlink pointing to librte_latencystats.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:06.920 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:06.920 Installing symlink pointing to librte_lpm.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:06.920 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:06.920 Installing symlink pointing to librte_member.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:06.920 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:06.920 Installing symlink pointing to librte_pcapng.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:06.920 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:06.920 Installing symlink pointing to librte_power.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:06.920 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:06.920 Installing symlink pointing to librte_rawdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:06.920 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:06.920 Installing symlink pointing to librte_regexdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:06.920 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:06.920 Installing symlink pointing to librte_mldev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:06.920 Installing symlink pointing to librte_mldev.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:06.920 Installing symlink pointing to librte_rib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:06.920 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:06.920 Installing symlink pointing to librte_reorder.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:06.920 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:06.920 Installing symlink pointing to librte_sched.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:06.920 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:06.920 Installing symlink pointing to librte_security.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:06.920 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:06.920 Installing symlink pointing to librte_stack.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:06.920 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:06.920 Installing symlink pointing to librte_vhost.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:06.920 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:06.920 Installing symlink pointing to librte_ipsec.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:06.920 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:06.920 Installing symlink pointing to librte_pdcp.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:06.920 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:06.920 Installing symlink pointing to librte_fib.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:06.920 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:06.920 Installing symlink pointing to librte_port.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:06.920 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:06.920 Installing symlink pointing to librte_pdump.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:06.920 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:06.920 Installing symlink pointing to librte_table.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:06.920 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:06.920 
Installing symlink pointing to librte_pipeline.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:06.920 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:06.921 Installing symlink pointing to librte_graph.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:06.921 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:06.921 Installing symlink pointing to librte_node.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:06.921 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:06.921 Installing symlink pointing to librte_bus_pci.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:02:06.921 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:02:06.921 Installing symlink pointing to librte_bus_vdev.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:02:06.921 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:02:06.921 Installing symlink pointing to librte_mempool_ring.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:02:06.921 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:02:06.921 Installing symlink pointing to librte_net_i40e.so.24.2 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:02:06.921 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:02:06.921 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:02:06.921 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:02:06.921 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:02:06.921 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:02:06.921 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:02:06.921 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:02:06.921 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:02:06.921 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:02:06.921 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:02:06.921 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:02:06.921 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:02:06.921 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:02:06.921 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:02:07.181 08:47:45 build_native_dpdk -- common/autobuild_common.sh@210 -- $ cat 00:02:07.181 08:47:45 build_native_dpdk -- common/autobuild_common.sh@215 -- $ cd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:07.181 00:02:07.181 real 0m39.888s 00:02:07.181 user 13m59.101s 00:02:07.181 sys 2m0.917s 00:02:07.181 08:47:45 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:07.181 08:47:45 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:07.181 ************************************ 00:02:07.181 END TEST build_native_dpdk 00:02:07.181 ************************************ 00:02:07.181 08:47:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:07.181 08:47:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:07.181 08:47:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:07.181 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:07.181 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:07.181 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:07.181 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:07.440 Using 'verbs' RDMA provider 00:02:18.012 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:28.077 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:28.077 Creating mk/config.mk...done. 00:02:28.077 Creating mk/cc.flags.mk...done. 00:02:28.077 Type 'make' to build. 00:02:28.077 08:48:04 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:28.077 08:48:04 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:28.077 08:48:04 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:28.077 08:48:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:28.077 ************************************ 00:02:28.077 START TEST make 00:02:28.077 ************************************ 00:02:28.077 08:48:04 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:28.077 make[1]: Nothing to be done for 'all'. 
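The configure invocation above consumes the DPDK tree staged by the preceding install phase: --with-dpdk points at .../dpdk/build, and the "Using .../dpdk/build/lib/pkgconfig for additional libs" line shows the lookup going through the libdpdk.pc and libdpdk-libs.pc files installed moments earlier. As a minimal sketch (illustrative standard pkg-config usage, not commands from this log), any out-of-tree build could consume the same staging area in the same way; the workspace path below is the one from this log and would differ elsewhere:

  # Point pkg-config at the .pc files staged under the DPDK build tree.
  DPDK_BUILD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
  export PKG_CONFIG_PATH="$DPDK_BUILD/lib/pkgconfig"

  pkg-config --modversion libdpdk            # version of the staged DPDK
  CFLAGS="$(pkg-config --cflags libdpdk)"    # roughly -I$DPDK_BUILD/include plus arch defines
  LIBS="$(pkg-config --libs libdpdk)"        # pulls in librte_eal, librte_mbuf, etc.

  # The shared objects (and the dpdk/pmds-24.2 driver symlinks created by
  # symlink-drivers-solibs.sh above) live under $DPDK_BUILD/lib, so a binary
  # linked this way also needs that directory on the loader path at run time:
  export LD_LIBRARY_PATH="$DPDK_BUILD/lib:$LD_LIBRARY_PATH"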
00:02:28.652 The Meson build system 00:02:28.652 Version: 1.3.1 00:02:28.652 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:28.652 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:28.652 Build type: native build 00:02:28.652 Project name: libvfio-user 00:02:28.652 Project version: 0.0.1 00:02:28.652 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:28.652 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:28.652 Host machine cpu family: x86_64 00:02:28.652 Host machine cpu: x86_64 00:02:28.652 Run-time dependency threads found: YES 00:02:28.652 Library dl found: YES 00:02:28.652 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:28.652 Run-time dependency json-c found: YES 0.17 00:02:28.652 Run-time dependency cmocka found: YES 1.1.7 00:02:28.652 Program pytest-3 found: NO 00:02:28.652 Program flake8 found: NO 00:02:28.652 Program misspell-fixer found: NO 00:02:28.652 Program restructuredtext-lint found: NO 00:02:28.652 Program valgrind found: YES (/usr/bin/valgrind) 00:02:28.652 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:28.652 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:28.652 Compiler for C supports arguments -Wwrite-strings: YES 00:02:28.652 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:28.652 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:28.652 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:28.652 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:28.652 Build targets in project: 8 00:02:28.652 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:28.652 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:28.652 00:02:28.652 libvfio-user 0.0.1 00:02:28.652 00:02:28.652 User defined options 00:02:28.652 buildtype : debug 00:02:28.652 default_library: shared 00:02:28.652 libdir : /usr/local/lib 00:02:28.652 00:02:28.652 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.241 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:29.502 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:29.502 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:29.502 [3/37] Compiling C object samples/null.p/null.c.o 00:02:29.502 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:29.502 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:29.502 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:29.502 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:29.502 [8/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:29.502 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:29.502 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:29.502 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:29.769 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:29.769 [13/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:29.769 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:29.769 [15/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:29.769 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:29.769 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:29.769 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:29.769 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:29.769 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:29.769 [21/37] Compiling C object samples/server.p/server.c.o 00:02:29.769 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:29.769 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:29.769 [24/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:29.769 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:29.769 [26/37] Compiling C object samples/client.p/client.c.o 00:02:29.769 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:29.769 [28/37] Linking target samples/client 00:02:29.769 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:02:30.030 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:30.030 [31/37] Linking target test/unit_tests 00:02:30.030 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:30.030 [33/37] Linking target samples/server 00:02:30.030 [34/37] Linking target samples/null 00:02:30.030 [35/37] Linking target samples/gpio-pci-idio-16 00:02:30.030 [36/37] Linking target samples/lspci 00:02:30.030 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:30.297 INFO: autodetecting backend as ninja 00:02:30.297 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:30.297 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:30.876 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:30.876 ninja: no work to do. 00:02:43.083 CC lib/ut/ut.o 00:02:43.083 CC lib/ut_mock/mock.o 00:02:43.083 CC lib/log/log.o 00:02:43.083 CC lib/log/log_flags.o 00:02:43.083 CC lib/log/log_deprecated.o 00:02:43.083 LIB libspdk_ut.a 00:02:43.083 LIB libspdk_log.a 00:02:43.083 LIB libspdk_ut_mock.a 00:02:43.083 SO libspdk_ut.so.2.0 00:02:43.084 SO libspdk_ut_mock.so.6.0 00:02:43.084 SO libspdk_log.so.7.0 00:02:43.084 SYMLINK libspdk_ut_mock.so 00:02:43.084 SYMLINK libspdk_ut.so 00:02:43.084 SYMLINK libspdk_log.so 00:02:43.084 CXX lib/trace_parser/trace.o 00:02:43.084 CC lib/dma/dma.o 00:02:43.084 CC lib/ioat/ioat.o 00:02:43.084 CC lib/util/base64.o 00:02:43.084 CC lib/util/bit_array.o 00:02:43.084 CC lib/util/cpuset.o 00:02:43.084 CC lib/util/crc16.o 00:02:43.084 CC lib/util/crc32.o 00:02:43.084 CC lib/util/crc32c.o 00:02:43.084 CC lib/util/crc32_ieee.o 00:02:43.084 CC lib/util/crc64.o 00:02:43.084 CC lib/util/dif.o 00:02:43.084 CC lib/util/fd.o 00:02:43.084 CC lib/util/fd_group.o 00:02:43.084 CC lib/util/file.o 00:02:43.084 CC lib/util/hexlify.o 00:02:43.084 CC lib/util/iov.o 00:02:43.084 CC lib/util/math.o 00:02:43.084 CC lib/util/net.o 00:02:43.084 CC lib/util/pipe.o 00:02:43.084 CC lib/util/strerror_tls.o 00:02:43.084 CC lib/util/string.o 00:02:43.084 CC lib/util/uuid.o 00:02:43.084 CC lib/util/xor.o 00:02:43.084 CC lib/util/zipf.o 00:02:43.084 CC lib/vfio_user/host/vfio_user_pci.o 00:02:43.084 CC lib/vfio_user/host/vfio_user.o 00:02:43.084 LIB libspdk_dma.a 00:02:43.084 SO libspdk_dma.so.4.0 00:02:43.084 SYMLINK libspdk_dma.so 00:02:43.084 LIB libspdk_ioat.a 00:02:43.084 SO libspdk_ioat.so.7.0 00:02:43.084 LIB libspdk_vfio_user.a 00:02:43.084 SYMLINK libspdk_ioat.so 00:02:43.084 SO libspdk_vfio_user.so.5.0 00:02:43.084 SYMLINK libspdk_vfio_user.so 00:02:43.084 LIB libspdk_util.a 00:02:43.084 SO libspdk_util.so.10.0 00:02:43.342 SYMLINK libspdk_util.so 00:02:43.342 CC lib/conf/conf.o 00:02:43.342 CC lib/idxd/idxd.o 00:02:43.342 CC lib/json/json_parse.o 00:02:43.342 CC lib/rdma_provider/common.o 00:02:43.342 CC lib/rdma_utils/rdma_utils.o 00:02:43.342 CC lib/vmd/vmd.o 00:02:43.342 CC lib/idxd/idxd_user.o 00:02:43.342 CC lib/json/json_util.o 00:02:43.342 CC lib/vmd/led.o 00:02:43.342 CC lib/env_dpdk/env.o 00:02:43.342 CC lib/idxd/idxd_kernel.o 00:02:43.342 CC lib/json/json_write.o 00:02:43.342 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:43.342 CC lib/env_dpdk/memory.o 00:02:43.342 CC lib/env_dpdk/pci.o 00:02:43.342 CC lib/env_dpdk/init.o 00:02:43.342 CC lib/env_dpdk/threads.o 00:02:43.342 CC lib/env_dpdk/pci_ioat.o 00:02:43.342 CC lib/env_dpdk/pci_virtio.o 00:02:43.342 CC lib/env_dpdk/pci_vmd.o 00:02:43.342 CC lib/env_dpdk/pci_idxd.o 00:02:43.342 CC lib/env_dpdk/pci_event.o 00:02:43.342 CC lib/env_dpdk/pci_dpdk.o 00:02:43.342 CC lib/env_dpdk/sigbus_handler.o 00:02:43.342 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:43.342 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:43.342 LIB libspdk_trace_parser.a 00:02:43.342 SO libspdk_trace_parser.so.5.0 00:02:43.600 SYMLINK libspdk_trace_parser.so 00:02:43.600 LIB libspdk_rdma_provider.a 00:02:43.600 SO libspdk_rdma_provider.so.6.0 00:02:43.600 LIB libspdk_conf.a 00:02:43.600 SO libspdk_conf.so.6.0 00:02:43.600 SYMLINK libspdk_rdma_provider.so 
00:02:43.600 SYMLINK libspdk_conf.so 00:02:43.600 LIB libspdk_json.a 00:02:43.858 SO libspdk_json.so.6.0 00:02:43.858 LIB libspdk_rdma_utils.a 00:02:43.858 SO libspdk_rdma_utils.so.1.0 00:02:43.858 SYMLINK libspdk_json.so 00:02:43.858 SYMLINK libspdk_rdma_utils.so 00:02:43.858 LIB libspdk_idxd.a 00:02:43.858 CC lib/jsonrpc/jsonrpc_server.o 00:02:43.858 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:43.858 CC lib/jsonrpc/jsonrpc_client.o 00:02:43.858 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:44.116 SO libspdk_idxd.so.12.0 00:02:44.116 SYMLINK libspdk_idxd.so 00:02:44.116 LIB libspdk_vmd.a 00:02:44.116 SO libspdk_vmd.so.6.0 00:02:44.116 SYMLINK libspdk_vmd.so 00:02:44.116 LIB libspdk_jsonrpc.a 00:02:44.374 SO libspdk_jsonrpc.so.6.0 00:02:44.374 SYMLINK libspdk_jsonrpc.so 00:02:44.633 CC lib/rpc/rpc.o 00:02:44.633 LIB libspdk_rpc.a 00:02:44.633 SO libspdk_rpc.so.6.0 00:02:44.891 SYMLINK libspdk_rpc.so 00:02:44.891 LIB libspdk_env_dpdk.a 00:02:44.891 CC lib/notify/notify.o 00:02:44.891 CC lib/notify/notify_rpc.o 00:02:44.891 CC lib/keyring/keyring.o 00:02:44.891 CC lib/trace/trace.o 00:02:44.891 CC lib/keyring/keyring_rpc.o 00:02:44.891 CC lib/trace/trace_flags.o 00:02:44.891 CC lib/trace/trace_rpc.o 00:02:44.891 SO libspdk_env_dpdk.so.15.0 00:02:45.149 LIB libspdk_notify.a 00:02:45.149 SYMLINK libspdk_env_dpdk.so 00:02:45.149 SO libspdk_notify.so.6.0 00:02:45.149 LIB libspdk_keyring.a 00:02:45.149 SYMLINK libspdk_notify.so 00:02:45.149 LIB libspdk_trace.a 00:02:45.149 SO libspdk_keyring.so.1.0 00:02:45.149 SO libspdk_trace.so.10.0 00:02:45.412 SYMLINK libspdk_keyring.so 00:02:45.412 SYMLINK libspdk_trace.so 00:02:45.412 CC lib/sock/sock.o 00:02:45.412 CC lib/sock/sock_rpc.o 00:02:45.412 CC lib/thread/thread.o 00:02:45.412 CC lib/thread/iobuf.o 00:02:45.979 LIB libspdk_sock.a 00:02:45.979 SO libspdk_sock.so.10.0 00:02:45.979 SYMLINK libspdk_sock.so 00:02:46.236 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:46.236 CC lib/nvme/nvme_ctrlr.o 00:02:46.236 CC lib/nvme/nvme_fabric.o 00:02:46.236 CC lib/nvme/nvme_ns_cmd.o 00:02:46.236 CC lib/nvme/nvme_ns.o 00:02:46.237 CC lib/nvme/nvme_pcie_common.o 00:02:46.237 CC lib/nvme/nvme_pcie.o 00:02:46.237 CC lib/nvme/nvme_qpair.o 00:02:46.237 CC lib/nvme/nvme.o 00:02:46.237 CC lib/nvme/nvme_quirks.o 00:02:46.237 CC lib/nvme/nvme_transport.o 00:02:46.237 CC lib/nvme/nvme_discovery.o 00:02:46.237 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:46.237 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:46.237 CC lib/nvme/nvme_tcp.o 00:02:46.237 CC lib/nvme/nvme_opal.o 00:02:46.237 CC lib/nvme/nvme_io_msg.o 00:02:46.237 CC lib/nvme/nvme_poll_group.o 00:02:46.237 CC lib/nvme/nvme_zns.o 00:02:46.237 CC lib/nvme/nvme_stubs.o 00:02:46.237 CC lib/nvme/nvme_auth.o 00:02:46.237 CC lib/nvme/nvme_cuse.o 00:02:46.237 CC lib/nvme/nvme_rdma.o 00:02:46.237 CC lib/nvme/nvme_vfio_user.o 00:02:47.173 LIB libspdk_thread.a 00:02:47.173 SO libspdk_thread.so.10.1 00:02:47.173 SYMLINK libspdk_thread.so 00:02:47.173 CC lib/blob/blobstore.o 00:02:47.173 CC lib/accel/accel.o 00:02:47.173 CC lib/init/json_config.o 00:02:47.173 CC lib/virtio/virtio.o 00:02:47.173 CC lib/vfu_tgt/tgt_endpoint.o 00:02:47.173 CC lib/accel/accel_rpc.o 00:02:47.173 CC lib/virtio/virtio_vhost_user.o 00:02:47.173 CC lib/blob/request.o 00:02:47.173 CC lib/init/subsystem.o 00:02:47.173 CC lib/vfu_tgt/tgt_rpc.o 00:02:47.173 CC lib/accel/accel_sw.o 00:02:47.173 CC lib/virtio/virtio_vfio_user.o 00:02:47.173 CC lib/init/subsystem_rpc.o 00:02:47.173 CC lib/blob/zeroes.o 00:02:47.173 CC lib/virtio/virtio_pci.o 00:02:47.173 CC lib/blob/blob_bs_dev.o 
00:02:47.173 CC lib/init/rpc.o 00:02:47.431 LIB libspdk_init.a 00:02:47.689 SO libspdk_init.so.5.0 00:02:47.689 LIB libspdk_virtio.a 00:02:47.689 LIB libspdk_vfu_tgt.a 00:02:47.689 SYMLINK libspdk_init.so 00:02:47.689 SO libspdk_virtio.so.7.0 00:02:47.689 SO libspdk_vfu_tgt.so.3.0 00:02:47.689 SYMLINK libspdk_vfu_tgt.so 00:02:47.689 SYMLINK libspdk_virtio.so 00:02:47.689 CC lib/event/app.o 00:02:47.689 CC lib/event/reactor.o 00:02:47.689 CC lib/event/log_rpc.o 00:02:47.689 CC lib/event/app_rpc.o 00:02:47.689 CC lib/event/scheduler_static.o 00:02:48.254 LIB libspdk_event.a 00:02:48.254 SO libspdk_event.so.14.0 00:02:48.254 LIB libspdk_accel.a 00:02:48.254 SYMLINK libspdk_event.so 00:02:48.254 SO libspdk_accel.so.16.0 00:02:48.512 SYMLINK libspdk_accel.so 00:02:48.512 LIB libspdk_nvme.a 00:02:48.512 CC lib/bdev/bdev.o 00:02:48.512 CC lib/bdev/bdev_rpc.o 00:02:48.512 CC lib/bdev/bdev_zone.o 00:02:48.512 CC lib/bdev/part.o 00:02:48.512 CC lib/bdev/scsi_nvme.o 00:02:48.771 SO libspdk_nvme.so.13.1 00:02:49.050 SYMLINK libspdk_nvme.so 00:02:50.428 LIB libspdk_blob.a 00:02:50.428 SO libspdk_blob.so.11.0 00:02:50.428 SYMLINK libspdk_blob.so 00:02:50.428 CC lib/lvol/lvol.o 00:02:50.428 CC lib/blobfs/blobfs.o 00:02:50.428 CC lib/blobfs/tree.o 00:02:50.995 LIB libspdk_bdev.a 00:02:51.253 SO libspdk_bdev.so.16.0 00:02:51.253 SYMLINK libspdk_bdev.so 00:02:51.253 LIB libspdk_blobfs.a 00:02:51.253 SO libspdk_blobfs.so.10.0 00:02:51.515 CC lib/scsi/dev.o 00:02:51.515 CC lib/ublk/ublk.o 00:02:51.515 CC lib/nvmf/ctrlr.o 00:02:51.515 LIB libspdk_lvol.a 00:02:51.515 CC lib/nbd/nbd.o 00:02:51.515 CC lib/scsi/lun.o 00:02:51.515 CC lib/ftl/ftl_core.o 00:02:51.515 CC lib/nvmf/ctrlr_discovery.o 00:02:51.515 CC lib/nbd/nbd_rpc.o 00:02:51.515 CC lib/ublk/ublk_rpc.o 00:02:51.515 CC lib/scsi/port.o 00:02:51.515 CC lib/nvmf/ctrlr_bdev.o 00:02:51.515 CC lib/ftl/ftl_init.o 00:02:51.515 CC lib/nvmf/subsystem.o 00:02:51.515 CC lib/scsi/scsi.o 00:02:51.515 CC lib/ftl/ftl_layout.o 00:02:51.515 CC lib/scsi/scsi_bdev.o 00:02:51.515 CC lib/nvmf/nvmf.o 00:02:51.515 CC lib/ftl/ftl_debug.o 00:02:51.515 CC lib/scsi/scsi_pr.o 00:02:51.515 CC lib/nvmf/nvmf_rpc.o 00:02:51.515 CC lib/ftl/ftl_io.o 00:02:51.515 CC lib/ftl/ftl_sb.o 00:02:51.515 CC lib/scsi/scsi_rpc.o 00:02:51.515 CC lib/scsi/task.o 00:02:51.515 CC lib/nvmf/transport.o 00:02:51.515 CC lib/ftl/ftl_l2p.o 00:02:51.515 CC lib/nvmf/tcp.o 00:02:51.515 CC lib/ftl/ftl_l2p_flat.o 00:02:51.515 CC lib/ftl/ftl_nv_cache.o 00:02:51.515 CC lib/nvmf/stubs.o 00:02:51.515 CC lib/ftl/ftl_band.o 00:02:51.515 CC lib/nvmf/mdns_server.o 00:02:51.515 CC lib/ftl/ftl_band_ops.o 00:02:51.515 CC lib/ftl/ftl_writer.o 00:02:51.515 CC lib/nvmf/vfio_user.o 00:02:51.515 CC lib/nvmf/rdma.o 00:02:51.515 CC lib/ftl/ftl_rq.o 00:02:51.515 CC lib/ftl/ftl_reloc.o 00:02:51.515 CC lib/nvmf/auth.o 00:02:51.515 CC lib/ftl/ftl_l2p_cache.o 00:02:51.515 CC lib/ftl/ftl_p2l.o 00:02:51.515 CC lib/ftl/mngt/ftl_mngt.o 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:51.515 SYMLINK libspdk_blobfs.so 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:51.515 SO libspdk_lvol.so.10.0 00:02:51.515 SYMLINK libspdk_lvol.so 00:02:51.515 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:51.779 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:51.779 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:51.779 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:51.779 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:51.779 CC 
lib/ftl/mngt/ftl_mngt_recovery.o 00:02:51.779 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:51.779 CC lib/ftl/utils/ftl_conf.o 00:02:51.779 CC lib/ftl/utils/ftl_md.o 00:02:51.779 CC lib/ftl/utils/ftl_mempool.o 00:02:51.779 CC lib/ftl/utils/ftl_bitmap.o 00:02:51.779 CC lib/ftl/utils/ftl_property.o 00:02:51.779 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:51.779 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.040 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.040 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.040 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.040 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:52.040 CC lib/ftl/base/ftl_base_dev.o 00:02:52.040 CC lib/ftl/base/ftl_base_bdev.o 00:02:52.040 CC lib/ftl/ftl_trace.o 00:02:52.297 LIB libspdk_nbd.a 00:02:52.297 SO libspdk_nbd.so.7.0 00:02:52.297 LIB libspdk_scsi.a 00:02:52.297 SYMLINK libspdk_nbd.so 00:02:52.297 SO libspdk_scsi.so.9.0 00:02:52.555 LIB libspdk_ublk.a 00:02:52.555 SYMLINK libspdk_scsi.so 00:02:52.555 SO libspdk_ublk.so.3.0 00:02:52.555 SYMLINK libspdk_ublk.so 00:02:52.555 CC lib/vhost/vhost.o 00:02:52.555 CC lib/iscsi/conn.o 00:02:52.555 CC lib/iscsi/init_grp.o 00:02:52.555 CC lib/vhost/vhost_rpc.o 00:02:52.555 CC lib/vhost/vhost_scsi.o 00:02:52.555 CC lib/iscsi/iscsi.o 00:02:52.555 CC lib/iscsi/md5.o 00:02:52.555 CC lib/vhost/vhost_blk.o 00:02:52.555 CC lib/iscsi/param.o 00:02:52.555 CC lib/iscsi/portal_grp.o 00:02:52.555 CC lib/vhost/rte_vhost_user.o 00:02:52.555 CC lib/iscsi/tgt_node.o 00:02:52.555 CC lib/iscsi/iscsi_subsystem.o 00:02:52.555 CC lib/iscsi/iscsi_rpc.o 00:02:52.555 CC lib/iscsi/task.o 00:02:52.813 LIB libspdk_ftl.a 00:02:53.071 SO libspdk_ftl.so.9.0 00:02:53.329 SYMLINK libspdk_ftl.so 00:02:53.894 LIB libspdk_vhost.a 00:02:53.894 SO libspdk_vhost.so.8.0 00:02:53.894 LIB libspdk_nvmf.a 00:02:53.894 SYMLINK libspdk_vhost.so 00:02:54.153 SO libspdk_nvmf.so.19.0 00:02:54.153 LIB libspdk_iscsi.a 00:02:54.153 SO libspdk_iscsi.so.8.0 00:02:54.153 SYMLINK libspdk_nvmf.so 00:02:54.411 SYMLINK libspdk_iscsi.so 00:02:54.670 CC module/env_dpdk/env_dpdk_rpc.o 00:02:54.670 CC module/vfu_device/vfu_virtio.o 00:02:54.670 CC module/vfu_device/vfu_virtio_blk.o 00:02:54.670 CC module/vfu_device/vfu_virtio_scsi.o 00:02:54.670 CC module/vfu_device/vfu_virtio_rpc.o 00:02:54.670 CC module/sock/posix/posix.o 00:02:54.670 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:54.670 CC module/accel/dsa/accel_dsa.o 00:02:54.670 CC module/accel/dsa/accel_dsa_rpc.o 00:02:54.670 CC module/accel/error/accel_error.o 00:02:54.670 CC module/scheduler/gscheduler/gscheduler.o 00:02:54.670 CC module/accel/error/accel_error_rpc.o 00:02:54.670 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:54.670 CC module/keyring/linux/keyring.o 00:02:54.670 CC module/accel/ioat/accel_ioat.o 00:02:54.670 CC module/keyring/file/keyring.o 00:02:54.670 CC module/accel/ioat/accel_ioat_rpc.o 00:02:54.670 CC module/keyring/linux/keyring_rpc.o 00:02:54.670 CC module/keyring/file/keyring_rpc.o 00:02:54.670 CC module/accel/iaa/accel_iaa.o 00:02:54.670 CC module/blob/bdev/blob_bdev.o 00:02:54.670 CC module/accel/iaa/accel_iaa_rpc.o 00:02:54.670 LIB libspdk_env_dpdk_rpc.a 00:02:54.670 SO libspdk_env_dpdk_rpc.so.6.0 00:02:54.929 SYMLINK libspdk_env_dpdk_rpc.so 00:02:54.929 LIB libspdk_keyring_linux.a 00:02:54.929 LIB libspdk_keyring_file.a 
00:02:54.929 LIB libspdk_scheduler_gscheduler.a 00:02:54.929 LIB libspdk_scheduler_dpdk_governor.a 00:02:54.929 SO libspdk_keyring_linux.so.1.0 00:02:54.929 SO libspdk_keyring_file.so.1.0 00:02:54.929 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:54.929 SO libspdk_scheduler_gscheduler.so.4.0 00:02:54.929 LIB libspdk_accel_error.a 00:02:54.929 LIB libspdk_accel_ioat.a 00:02:54.929 LIB libspdk_scheduler_dynamic.a 00:02:54.929 SO libspdk_accel_error.so.2.0 00:02:54.929 LIB libspdk_accel_iaa.a 00:02:54.929 SO libspdk_accel_ioat.so.6.0 00:02:54.929 SO libspdk_scheduler_dynamic.so.4.0 00:02:54.929 SYMLINK libspdk_keyring_file.so 00:02:54.929 SYMLINK libspdk_keyring_linux.so 00:02:54.929 SYMLINK libspdk_scheduler_gscheduler.so 00:02:54.929 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:54.929 SO libspdk_accel_iaa.so.3.0 00:02:54.929 SYMLINK libspdk_accel_error.so 00:02:54.929 LIB libspdk_accel_dsa.a 00:02:54.929 SYMLINK libspdk_accel_ioat.so 00:02:54.929 LIB libspdk_blob_bdev.a 00:02:54.929 SYMLINK libspdk_scheduler_dynamic.so 00:02:54.929 SO libspdk_accel_dsa.so.5.0 00:02:54.929 SO libspdk_blob_bdev.so.11.0 00:02:54.929 SYMLINK libspdk_accel_iaa.so 00:02:55.187 SYMLINK libspdk_blob_bdev.so 00:02:55.187 SYMLINK libspdk_accel_dsa.so 00:02:55.187 LIB libspdk_vfu_device.a 00:02:55.187 SO libspdk_vfu_device.so.3.0 00:02:55.449 CC module/blobfs/bdev/blobfs_bdev.o 00:02:55.449 CC module/bdev/gpt/gpt.o 00:02:55.449 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:55.449 CC module/bdev/gpt/vbdev_gpt.o 00:02:55.449 CC module/bdev/null/bdev_null.o 00:02:55.449 CC module/bdev/lvol/vbdev_lvol.o 00:02:55.449 CC module/bdev/error/vbdev_error.o 00:02:55.449 CC module/bdev/delay/vbdev_delay.o 00:02:55.449 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:55.449 CC module/bdev/null/bdev_null_rpc.o 00:02:55.449 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:55.449 CC module/bdev/passthru/vbdev_passthru.o 00:02:55.449 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:55.449 CC module/bdev/error/vbdev_error_rpc.o 00:02:55.449 CC module/bdev/raid/bdev_raid.o 00:02:55.449 CC module/bdev/nvme/bdev_nvme.o 00:02:55.449 CC module/bdev/split/vbdev_split.o 00:02:55.449 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:55.449 CC module/bdev/malloc/bdev_malloc.o 00:02:55.449 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:55.449 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:55.449 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:55.449 CC module/bdev/raid/bdev_raid_rpc.o 00:02:55.449 CC module/bdev/nvme/nvme_rpc.o 00:02:55.449 CC module/bdev/ftl/bdev_ftl.o 00:02:55.449 CC module/bdev/raid/bdev_raid_sb.o 00:02:55.449 CC module/bdev/raid/raid0.o 00:02:55.449 CC module/bdev/split/vbdev_split_rpc.o 00:02:55.449 CC module/bdev/iscsi/bdev_iscsi.o 00:02:55.449 CC module/bdev/nvme/bdev_mdns_client.o 00:02:55.449 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:55.449 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:55.449 CC module/bdev/aio/bdev_aio.o 00:02:55.449 CC module/bdev/nvme/vbdev_opal.o 00:02:55.449 CC module/bdev/aio/bdev_aio_rpc.o 00:02:55.449 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:55.449 CC module/bdev/raid/raid1.o 00:02:55.449 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:55.449 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:55.449 CC module/bdev/raid/concat.o 00:02:55.449 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:55.449 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:55.449 SYMLINK libspdk_vfu_device.so 00:02:55.707 LIB libspdk_sock_posix.a 00:02:55.707 SO libspdk_sock_posix.so.6.0 00:02:55.707 LIB libspdk_blobfs_bdev.a 
00:02:55.707 SYMLINK libspdk_sock_posix.so 00:02:55.707 SO libspdk_blobfs_bdev.so.6.0 00:02:55.707 LIB libspdk_bdev_split.a 00:02:55.707 LIB libspdk_bdev_gpt.a 00:02:55.707 SYMLINK libspdk_blobfs_bdev.so 00:02:55.707 SO libspdk_bdev_split.so.6.0 00:02:55.707 SO libspdk_bdev_gpt.so.6.0 00:02:55.707 LIB libspdk_bdev_error.a 00:02:55.965 LIB libspdk_bdev_null.a 00:02:55.965 SYMLINK libspdk_bdev_split.so 00:02:55.965 SO libspdk_bdev_error.so.6.0 00:02:55.965 SYMLINK libspdk_bdev_gpt.so 00:02:55.965 SO libspdk_bdev_null.so.6.0 00:02:55.965 LIB libspdk_bdev_ftl.a 00:02:55.965 LIB libspdk_bdev_passthru.a 00:02:55.965 LIB libspdk_bdev_delay.a 00:02:55.965 SO libspdk_bdev_ftl.so.6.0 00:02:55.965 SYMLINK libspdk_bdev_error.so 00:02:55.965 SYMLINK libspdk_bdev_null.so 00:02:55.965 SO libspdk_bdev_passthru.so.6.0 00:02:55.965 SO libspdk_bdev_delay.so.6.0 00:02:55.965 LIB libspdk_bdev_aio.a 00:02:55.965 LIB libspdk_bdev_zone_block.a 00:02:55.965 LIB libspdk_bdev_iscsi.a 00:02:55.965 LIB libspdk_bdev_malloc.a 00:02:55.965 SYMLINK libspdk_bdev_ftl.so 00:02:55.965 SO libspdk_bdev_aio.so.6.0 00:02:55.965 SO libspdk_bdev_iscsi.so.6.0 00:02:55.965 SO libspdk_bdev_zone_block.so.6.0 00:02:55.965 SYMLINK libspdk_bdev_passthru.so 00:02:55.965 SO libspdk_bdev_malloc.so.6.0 00:02:55.965 SYMLINK libspdk_bdev_delay.so 00:02:55.965 SYMLINK libspdk_bdev_zone_block.so 00:02:55.965 SYMLINK libspdk_bdev_aio.so 00:02:55.965 SYMLINK libspdk_bdev_iscsi.so 00:02:55.965 SYMLINK libspdk_bdev_malloc.so 00:02:55.965 LIB libspdk_bdev_virtio.a 00:02:55.965 LIB libspdk_bdev_lvol.a 00:02:55.965 SO libspdk_bdev_virtio.so.6.0 00:02:56.223 SO libspdk_bdev_lvol.so.6.0 00:02:56.223 SYMLINK libspdk_bdev_virtio.so 00:02:56.223 SYMLINK libspdk_bdev_lvol.so 00:02:56.481 LIB libspdk_bdev_raid.a 00:02:56.739 SO libspdk_bdev_raid.so.6.0 00:02:56.739 SYMLINK libspdk_bdev_raid.so 00:02:57.674 LIB libspdk_bdev_nvme.a 00:02:57.931 SO libspdk_bdev_nvme.so.7.0 00:02:57.931 SYMLINK libspdk_bdev_nvme.so 00:02:58.189 CC module/event/subsystems/vmd/vmd.o 00:02:58.189 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.189 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.189 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.189 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:58.189 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.189 CC module/event/subsystems/sock/sock.o 00:02:58.189 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:58.189 CC module/event/subsystems/keyring/keyring.o 00:02:58.448 LIB libspdk_event_keyring.a 00:02:58.448 LIB libspdk_event_vhost_blk.a 00:02:58.448 LIB libspdk_event_sock.a 00:02:58.448 LIB libspdk_event_vmd.a 00:02:58.448 LIB libspdk_event_vfu_tgt.a 00:02:58.448 LIB libspdk_event_scheduler.a 00:02:58.448 LIB libspdk_event_iobuf.a 00:02:58.448 SO libspdk_event_keyring.so.1.0 00:02:58.448 SO libspdk_event_sock.so.5.0 00:02:58.448 SO libspdk_event_vhost_blk.so.3.0 00:02:58.448 SO libspdk_event_scheduler.so.4.0 00:02:58.448 SO libspdk_event_vfu_tgt.so.3.0 00:02:58.448 SO libspdk_event_vmd.so.6.0 00:02:58.448 SO libspdk_event_iobuf.so.3.0 00:02:58.448 SYMLINK libspdk_event_keyring.so 00:02:58.448 SYMLINK libspdk_event_sock.so 00:02:58.448 SYMLINK libspdk_event_vhost_blk.so 00:02:58.448 SYMLINK libspdk_event_scheduler.so 00:02:58.448 SYMLINK libspdk_event_vfu_tgt.so 00:02:58.448 SYMLINK libspdk_event_vmd.so 00:02:58.448 SYMLINK libspdk_event_iobuf.so 00:02:58.707 CC module/event/subsystems/accel/accel.o 00:02:58.707 LIB libspdk_event_accel.a 00:02:58.965 SO libspdk_event_accel.so.6.0 00:02:58.965 SYMLINK 
libspdk_event_accel.so 00:02:58.965 CC module/event/subsystems/bdev/bdev.o 00:02:59.223 LIB libspdk_event_bdev.a 00:02:59.223 SO libspdk_event_bdev.so.6.0 00:02:59.223 SYMLINK libspdk_event_bdev.so 00:02:59.481 CC module/event/subsystems/scsi/scsi.o 00:02:59.481 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:59.481 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:59.481 CC module/event/subsystems/ublk/ublk.o 00:02:59.481 CC module/event/subsystems/nbd/nbd.o 00:02:59.740 LIB libspdk_event_nbd.a 00:02:59.740 LIB libspdk_event_ublk.a 00:02:59.740 LIB libspdk_event_scsi.a 00:02:59.740 SO libspdk_event_nbd.so.6.0 00:02:59.740 SO libspdk_event_ublk.so.3.0 00:02:59.740 SO libspdk_event_scsi.so.6.0 00:02:59.740 SYMLINK libspdk_event_nbd.so 00:02:59.740 SYMLINK libspdk_event_ublk.so 00:02:59.740 SYMLINK libspdk_event_scsi.so 00:02:59.740 LIB libspdk_event_nvmf.a 00:02:59.740 SO libspdk_event_nvmf.so.6.0 00:02:59.740 SYMLINK libspdk_event_nvmf.so 00:02:59.998 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.998 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:59.998 LIB libspdk_event_vhost_scsi.a 00:02:59.998 LIB libspdk_event_iscsi.a 00:02:59.998 SO libspdk_event_vhost_scsi.so.3.0 00:02:59.998 SO libspdk_event_iscsi.so.6.0 00:02:59.998 SYMLINK libspdk_event_vhost_scsi.so 00:03:00.256 SYMLINK libspdk_event_iscsi.so 00:03:00.256 SO libspdk.so.6.0 00:03:00.256 SYMLINK libspdk.so 00:03:00.519 CXX app/trace/trace.o 00:03:00.519 CC app/trace_record/trace_record.o 00:03:00.519 CC test/rpc_client/rpc_client_test.o 00:03:00.519 CC app/spdk_nvme_perf/perf.o 00:03:00.519 CC app/spdk_top/spdk_top.o 00:03:00.519 CC app/spdk_nvme_identify/identify.o 00:03:00.519 CC app/spdk_nvme_discover/discovery_aer.o 00:03:00.519 CC app/spdk_lspci/spdk_lspci.o 00:03:00.519 TEST_HEADER include/spdk/accel.h 00:03:00.519 TEST_HEADER include/spdk/accel_module.h 00:03:00.519 TEST_HEADER include/spdk/assert.h 00:03:00.519 TEST_HEADER include/spdk/barrier.h 00:03:00.519 TEST_HEADER include/spdk/base64.h 00:03:00.519 TEST_HEADER include/spdk/bdev.h 00:03:00.519 TEST_HEADER include/spdk/bdev_module.h 00:03:00.519 TEST_HEADER include/spdk/bdev_zone.h 00:03:00.519 TEST_HEADER include/spdk/bit_array.h 00:03:00.519 TEST_HEADER include/spdk/bit_pool.h 00:03:00.519 TEST_HEADER include/spdk/blob_bdev.h 00:03:00.519 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:00.519 TEST_HEADER include/spdk/blobfs.h 00:03:00.519 TEST_HEADER include/spdk/blob.h 00:03:00.519 TEST_HEADER include/spdk/config.h 00:03:00.519 TEST_HEADER include/spdk/conf.h 00:03:00.519 TEST_HEADER include/spdk/cpuset.h 00:03:00.519 TEST_HEADER include/spdk/crc16.h 00:03:00.519 TEST_HEADER include/spdk/crc32.h 00:03:00.519 TEST_HEADER include/spdk/crc64.h 00:03:00.519 TEST_HEADER include/spdk/dif.h 00:03:00.519 TEST_HEADER include/spdk/dma.h 00:03:00.519 TEST_HEADER include/spdk/endian.h 00:03:00.519 TEST_HEADER include/spdk/env_dpdk.h 00:03:00.519 TEST_HEADER include/spdk/env.h 00:03:00.519 TEST_HEADER include/spdk/event.h 00:03:00.519 TEST_HEADER include/spdk/fd_group.h 00:03:00.519 TEST_HEADER include/spdk/fd.h 00:03:00.519 TEST_HEADER include/spdk/file.h 00:03:00.519 TEST_HEADER include/spdk/ftl.h 00:03:00.519 TEST_HEADER include/spdk/gpt_spec.h 00:03:00.519 TEST_HEADER include/spdk/histogram_data.h 00:03:00.519 TEST_HEADER include/spdk/hexlify.h 00:03:00.519 TEST_HEADER include/spdk/idxd.h 00:03:00.519 TEST_HEADER include/spdk/idxd_spec.h 00:03:00.519 TEST_HEADER include/spdk/init.h 00:03:00.519 TEST_HEADER include/spdk/ioat.h 00:03:00.519 TEST_HEADER 
include/spdk/ioat_spec.h 00:03:00.519 TEST_HEADER include/spdk/iscsi_spec.h 00:03:00.519 TEST_HEADER include/spdk/json.h 00:03:00.519 TEST_HEADER include/spdk/jsonrpc.h 00:03:00.519 TEST_HEADER include/spdk/keyring.h 00:03:00.519 TEST_HEADER include/spdk/keyring_module.h 00:03:00.519 TEST_HEADER include/spdk/likely.h 00:03:00.519 TEST_HEADER include/spdk/log.h 00:03:00.519 TEST_HEADER include/spdk/lvol.h 00:03:00.519 TEST_HEADER include/spdk/memory.h 00:03:00.519 TEST_HEADER include/spdk/mmio.h 00:03:00.519 TEST_HEADER include/spdk/nbd.h 00:03:00.519 TEST_HEADER include/spdk/net.h 00:03:00.519 TEST_HEADER include/spdk/notify.h 00:03:00.519 TEST_HEADER include/spdk/nvme.h 00:03:00.519 TEST_HEADER include/spdk/nvme_intel.h 00:03:00.519 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:00.519 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:00.519 TEST_HEADER include/spdk/nvme_spec.h 00:03:00.519 TEST_HEADER include/spdk/nvme_zns.h 00:03:00.519 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:00.519 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:00.519 TEST_HEADER include/spdk/nvmf.h 00:03:00.519 TEST_HEADER include/spdk/nvmf_spec.h 00:03:00.519 TEST_HEADER include/spdk/nvmf_transport.h 00:03:00.519 TEST_HEADER include/spdk/opal.h 00:03:00.519 TEST_HEADER include/spdk/opal_spec.h 00:03:00.519 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:00.519 TEST_HEADER include/spdk/pci_ids.h 00:03:00.519 TEST_HEADER include/spdk/pipe.h 00:03:00.519 TEST_HEADER include/spdk/queue.h 00:03:00.519 TEST_HEADER include/spdk/reduce.h 00:03:00.519 TEST_HEADER include/spdk/rpc.h 00:03:00.519 TEST_HEADER include/spdk/scheduler.h 00:03:00.519 TEST_HEADER include/spdk/scsi.h 00:03:00.519 TEST_HEADER include/spdk/scsi_spec.h 00:03:00.519 TEST_HEADER include/spdk/sock.h 00:03:00.519 TEST_HEADER include/spdk/stdinc.h 00:03:00.519 TEST_HEADER include/spdk/string.h 00:03:00.519 TEST_HEADER include/spdk/thread.h 00:03:00.519 TEST_HEADER include/spdk/trace_parser.h 00:03:00.519 TEST_HEADER include/spdk/trace.h 00:03:00.519 CC app/spdk_dd/spdk_dd.o 00:03:00.519 TEST_HEADER include/spdk/tree.h 00:03:00.519 TEST_HEADER include/spdk/ublk.h 00:03:00.519 TEST_HEADER include/spdk/util.h 00:03:00.519 TEST_HEADER include/spdk/uuid.h 00:03:00.519 TEST_HEADER include/spdk/version.h 00:03:00.519 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:00.519 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:00.519 TEST_HEADER include/spdk/vhost.h 00:03:00.519 TEST_HEADER include/spdk/vmd.h 00:03:00.519 TEST_HEADER include/spdk/zipf.h 00:03:00.519 TEST_HEADER include/spdk/xor.h 00:03:00.519 CC app/iscsi_tgt/iscsi_tgt.o 00:03:00.519 CXX test/cpp_headers/accel.o 00:03:00.519 CXX test/cpp_headers/accel_module.o 00:03:00.519 CXX test/cpp_headers/assert.o 00:03:00.519 CXX test/cpp_headers/barrier.o 00:03:00.519 CXX test/cpp_headers/base64.o 00:03:00.519 CXX test/cpp_headers/bdev.o 00:03:00.519 CXX test/cpp_headers/bdev_module.o 00:03:00.519 CXX test/cpp_headers/bdev_zone.o 00:03:00.519 CXX test/cpp_headers/bit_array.o 00:03:00.519 CXX test/cpp_headers/bit_pool.o 00:03:00.519 CXX test/cpp_headers/blob_bdev.o 00:03:00.519 CXX test/cpp_headers/blobfs_bdev.o 00:03:00.519 CXX test/cpp_headers/blobfs.o 00:03:00.519 CXX test/cpp_headers/blob.o 00:03:00.519 CXX test/cpp_headers/conf.o 00:03:00.519 CXX test/cpp_headers/config.o 00:03:00.519 CXX test/cpp_headers/cpuset.o 00:03:00.519 CXX test/cpp_headers/crc16.o 00:03:00.519 CC app/nvmf_tgt/nvmf_main.o 00:03:00.519 CC app/spdk_tgt/spdk_tgt.o 00:03:00.519 CXX test/cpp_headers/crc32.o 00:03:00.519 CC 
test/thread/poller_perf/poller_perf.o 00:03:00.519 CC test/app/histogram_perf/histogram_perf.o 00:03:00.519 CC examples/util/zipf/zipf.o 00:03:00.519 CC test/app/jsoncat/jsoncat.o 00:03:00.519 CC examples/ioat/verify/verify.o 00:03:00.519 CC examples/ioat/perf/perf.o 00:03:00.519 CC test/env/memory/memory_ut.o 00:03:00.519 CC test/app/stub/stub.o 00:03:00.519 CC test/env/vtophys/vtophys.o 00:03:00.519 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.519 CC app/fio/nvme/fio_plugin.o 00:03:00.519 CC test/env/pci/pci_ut.o 00:03:00.782 CC test/dma/test_dma/test_dma.o 00:03:00.782 CC test/app/bdev_svc/bdev_svc.o 00:03:00.782 CC app/fio/bdev/fio_plugin.o 00:03:00.782 LINK spdk_lspci 00:03:00.782 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.782 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:00.782 LINK rpc_client_test 00:03:00.782 LINK interrupt_tgt 00:03:00.782 LINK spdk_nvme_discover 00:03:00.782 LINK histogram_perf 00:03:00.782 LINK poller_perf 00:03:01.047 LINK zipf 00:03:01.047 LINK vtophys 00:03:01.047 CXX test/cpp_headers/crc64.o 00:03:01.047 LINK jsoncat 00:03:01.047 LINK nvmf_tgt 00:03:01.047 CXX test/cpp_headers/dif.o 00:03:01.047 LINK spdk_trace_record 00:03:01.047 CXX test/cpp_headers/dma.o 00:03:01.047 CXX test/cpp_headers/endian.o 00:03:01.047 CXX test/cpp_headers/env_dpdk.o 00:03:01.047 LINK env_dpdk_post_init 00:03:01.047 CXX test/cpp_headers/env.o 00:03:01.047 CXX test/cpp_headers/event.o 00:03:01.047 CXX test/cpp_headers/fd_group.o 00:03:01.047 CXX test/cpp_headers/fd.o 00:03:01.047 CXX test/cpp_headers/file.o 00:03:01.047 CXX test/cpp_headers/ftl.o 00:03:01.047 CXX test/cpp_headers/gpt_spec.o 00:03:01.047 CXX test/cpp_headers/hexlify.o 00:03:01.047 LINK stub 00:03:01.047 CXX test/cpp_headers/histogram_data.o 00:03:01.047 CXX test/cpp_headers/idxd.o 00:03:01.047 LINK iscsi_tgt 00:03:01.047 LINK ioat_perf 00:03:01.047 LINK verify 00:03:01.047 LINK spdk_tgt 00:03:01.047 CXX test/cpp_headers/idxd_spec.o 00:03:01.047 LINK bdev_svc 00:03:01.047 CXX test/cpp_headers/init.o 00:03:01.047 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:01.047 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:01.047 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:01.314 LINK spdk_dd 00:03:01.314 CXX test/cpp_headers/ioat.o 00:03:01.314 CXX test/cpp_headers/iscsi_spec.o 00:03:01.314 CXX test/cpp_headers/ioat_spec.o 00:03:01.314 CXX test/cpp_headers/json.o 00:03:01.314 CXX test/cpp_headers/jsonrpc.o 00:03:01.314 CXX test/cpp_headers/keyring.o 00:03:01.314 CXX test/cpp_headers/keyring_module.o 00:03:01.314 LINK spdk_trace 00:03:01.314 CXX test/cpp_headers/likely.o 00:03:01.314 CXX test/cpp_headers/log.o 00:03:01.314 LINK pci_ut 00:03:01.314 CXX test/cpp_headers/lvol.o 00:03:01.314 CXX test/cpp_headers/memory.o 00:03:01.314 CXX test/cpp_headers/mmio.o 00:03:01.314 CXX test/cpp_headers/nbd.o 00:03:01.314 CXX test/cpp_headers/net.o 00:03:01.314 CXX test/cpp_headers/notify.o 00:03:01.314 CXX test/cpp_headers/nvme.o 00:03:01.314 CXX test/cpp_headers/nvme_intel.o 00:03:01.314 CXX test/cpp_headers/nvme_ocssd.o 00:03:01.314 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:01.314 CXX test/cpp_headers/nvme_spec.o 00:03:01.314 CXX test/cpp_headers/nvme_zns.o 00:03:01.314 CXX test/cpp_headers/nvmf_cmd.o 00:03:01.314 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:01.580 LINK test_dma 00:03:01.580 CXX test/cpp_headers/nvmf.o 00:03:01.580 CXX test/cpp_headers/nvmf_spec.o 00:03:01.580 CXX test/cpp_headers/nvmf_transport.o 00:03:01.580 CXX test/cpp_headers/opal.o 00:03:01.580 CXX test/cpp_headers/opal_spec.o 
00:03:01.580 CXX test/cpp_headers/pci_ids.o 00:03:01.580 CXX test/cpp_headers/pipe.o 00:03:01.580 CXX test/cpp_headers/queue.o 00:03:01.580 CXX test/cpp_headers/reduce.o 00:03:01.580 LINK nvme_fuzz 00:03:01.580 CC examples/sock/hello_world/hello_sock.o 00:03:01.580 CC test/event/event_perf/event_perf.o 00:03:01.580 CC test/event/reactor/reactor.o 00:03:01.580 CXX test/cpp_headers/rpc.o 00:03:01.580 CC examples/thread/thread/thread_ex.o 00:03:01.845 CC examples/idxd/perf/perf.o 00:03:01.845 LINK spdk_bdev 00:03:01.845 CC examples/vmd/lsvmd/lsvmd.o 00:03:01.845 CC examples/vmd/led/led.o 00:03:01.845 CXX test/cpp_headers/scheduler.o 00:03:01.845 LINK spdk_nvme 00:03:01.845 CXX test/cpp_headers/scsi.o 00:03:01.845 CXX test/cpp_headers/scsi_spec.o 00:03:01.845 CXX test/cpp_headers/sock.o 00:03:01.845 CXX test/cpp_headers/stdinc.o 00:03:01.845 CC test/event/reactor_perf/reactor_perf.o 00:03:01.845 CXX test/cpp_headers/string.o 00:03:01.845 CXX test/cpp_headers/thread.o 00:03:01.845 CXX test/cpp_headers/trace.o 00:03:01.845 CXX test/cpp_headers/trace_parser.o 00:03:01.845 CXX test/cpp_headers/tree.o 00:03:01.845 CXX test/cpp_headers/ublk.o 00:03:01.845 CC test/event/app_repeat/app_repeat.o 00:03:01.845 CXX test/cpp_headers/util.o 00:03:01.845 CXX test/cpp_headers/uuid.o 00:03:01.845 CXX test/cpp_headers/version.o 00:03:01.845 LINK vhost_fuzz 00:03:01.845 CXX test/cpp_headers/vfio_user_pci.o 00:03:01.845 CC test/event/scheduler/scheduler.o 00:03:01.845 CXX test/cpp_headers/vfio_user_spec.o 00:03:01.845 CXX test/cpp_headers/vhost.o 00:03:01.845 CXX test/cpp_headers/vmd.o 00:03:01.845 LINK spdk_nvme_perf 00:03:01.845 CXX test/cpp_headers/xor.o 00:03:01.845 CXX test/cpp_headers/zipf.o 00:03:01.845 LINK mem_callbacks 00:03:02.107 LINK event_perf 00:03:02.107 LINK reactor 00:03:02.107 LINK lsvmd 00:03:02.107 CC app/vhost/vhost.o 00:03:02.107 LINK led 00:03:02.107 LINK spdk_top 00:03:02.107 LINK reactor_perf 00:03:02.107 LINK app_repeat 00:03:02.107 LINK hello_sock 00:03:02.107 CC test/nvme/aer/aer.o 00:03:02.107 LINK thread 00:03:02.107 CC test/nvme/reset/reset.o 00:03:02.107 CC test/nvme/overhead/overhead.o 00:03:02.107 CC test/nvme/e2edp/nvme_dp.o 00:03:02.107 CC test/nvme/err_injection/err_injection.o 00:03:02.107 CC test/nvme/sgl/sgl.o 00:03:02.107 CC test/nvme/simple_copy/simple_copy.o 00:03:02.107 CC test/nvme/reserve/reserve.o 00:03:02.107 LINK spdk_nvme_identify 00:03:02.107 CC test/nvme/startup/startup.o 00:03:02.368 CC test/accel/dif/dif.o 00:03:02.368 CC test/blobfs/mkfs/mkfs.o 00:03:02.368 CC test/nvme/connect_stress/connect_stress.o 00:03:02.368 CC test/nvme/compliance/nvme_compliance.o 00:03:02.368 CC test/nvme/boot_partition/boot_partition.o 00:03:02.368 CC test/nvme/fused_ordering/fused_ordering.o 00:03:02.368 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:02.368 CC test/nvme/fdp/fdp.o 00:03:02.368 CC test/lvol/esnap/esnap.o 00:03:02.368 CC test/nvme/cuse/cuse.o 00:03:02.368 LINK idxd_perf 00:03:02.368 LINK scheduler 00:03:02.368 LINK vhost 00:03:02.368 LINK startup 00:03:02.627 LINK boot_partition 00:03:02.627 LINK connect_stress 00:03:02.627 LINK reset 00:03:02.627 LINK sgl 00:03:02.627 LINK reserve 00:03:02.627 LINK err_injection 00:03:02.627 LINK fused_ordering 00:03:02.627 LINK doorbell_aers 00:03:02.627 LINK aer 00:03:02.627 LINK mkfs 00:03:02.627 LINK memory_ut 00:03:02.627 LINK overhead 00:03:02.627 LINK simple_copy 00:03:02.627 LINK nvme_compliance 00:03:02.627 CC examples/nvme/arbitration/arbitration.o 00:03:02.627 CC examples/nvme/hotplug/hotplug.o 00:03:02.627 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:02.627 CC examples/nvme/abort/abort.o 00:03:02.627 CC examples/nvme/hello_world/hello_world.o 00:03:02.627 CC examples/nvme/reconnect/reconnect.o 00:03:02.627 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:02.627 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:02.627 LINK nvme_dp 00:03:02.627 LINK fdp 00:03:02.886 CC examples/accel/perf/accel_perf.o 00:03:02.886 CC examples/blob/hello_world/hello_blob.o 00:03:02.886 CC examples/blob/cli/blobcli.o 00:03:02.886 LINK cmb_copy 00:03:02.886 LINK pmr_persistence 00:03:02.886 LINK dif 00:03:03.143 LINK hello_world 00:03:03.143 LINK hotplug 00:03:03.143 LINK reconnect 00:03:03.143 LINK hello_blob 00:03:03.143 LINK abort 00:03:03.143 LINK arbitration 00:03:03.401 LINK nvme_manage 00:03:03.401 LINK accel_perf 00:03:03.401 LINK blobcli 00:03:03.401 CC test/bdev/bdevio/bdevio.o 00:03:03.659 LINK iscsi_fuzz 00:03:03.659 CC examples/bdev/hello_world/hello_bdev.o 00:03:03.659 CC examples/bdev/bdevperf/bdevperf.o 00:03:03.918 LINK bdevio 00:03:03.918 LINK cuse 00:03:03.918 LINK hello_bdev 00:03:04.485 LINK bdevperf 00:03:04.743 CC examples/nvmf/nvmf/nvmf.o 00:03:05.001 LINK nvmf 00:03:07.538 LINK esnap 00:03:07.798 00:03:07.798 real 0m40.986s 00:03:07.798 user 7m27.222s 00:03:07.798 sys 1m51.063s 00:03:07.798 08:48:45 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:07.798 08:48:45 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.798 ************************************ 00:03:07.798 END TEST make 00:03:07.798 ************************************ 00:03:07.798 08:48:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.798 08:48:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.798 08:48:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.798 08:48:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.798 08:48:45 -- pm/common@44 -- $ pid=3532307 00:03:07.798 08:48:45 -- pm/common@50 -- $ kill -TERM 3532307 00:03:07.798 08:48:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.798 08:48:45 -- pm/common@44 -- $ pid=3532309 00:03:07.798 08:48:45 -- pm/common@50 -- $ kill -TERM 3532309 00:03:07.798 08:48:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:07.798 08:48:45 -- pm/common@44 -- $ pid=3532311 00:03:07.798 08:48:45 -- pm/common@50 -- $ kill -TERM 3532311 00:03:07.798 08:48:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:07.798 08:48:45 -- pm/common@44 -- $ pid=3532339 00:03:07.798 08:48:45 -- pm/common@50 -- $ sudo -E kill -TERM 3532339 00:03:07.798 08:48:45 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:07.798 08:48:45 -- nvmf/common.sh@7 -- # uname -s 00:03:07.798 08:48:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:07.798 08:48:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:07.798 08:48:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:07.798 08:48:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:07.798 08:48:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:07.798 08:48:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:07.798 08:48:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:07.798 08:48:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:07.798 08:48:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:07.798 08:48:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:07.798 08:48:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:03:07.798 08:48:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:03:07.798 08:48:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:07.798 08:48:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:07.798 08:48:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:07.798 08:48:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:07.798 08:48:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:07.798 08:48:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:07.798 08:48:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.798 08:48:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.798 08:48:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.798 08:48:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.798 08:48:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.798 08:48:45 -- paths/export.sh@5 -- # export PATH 00:03:07.798 08:48:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.798 08:48:45 -- nvmf/common.sh@47 -- # : 0 00:03:07.798 08:48:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:07.798 08:48:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:07.798 08:48:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:07.798 08:48:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:07.798 08:48:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:07.798 08:48:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:07.798 08:48:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:07.798 08:48:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:07.798 08:48:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:07.798 08:48:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:07.798 08:48:45 -- 
spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:07.798 08:48:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:07.798 08:48:45 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:07.798 08:48:45 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:07.798 08:48:45 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:07.798 08:48:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:07.798 08:48:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:07.798 08:48:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:07.798 08:48:45 -- spdk/autotest.sh@48 -- # udevadm_pid=3604170 00:03:07.798 08:48:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:07.798 08:48:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:07.798 08:48:45 -- pm/common@17 -- # local monitor 00:03:07.798 08:48:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@21 -- # date +%s 00:03:07.798 08:48:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.798 08:48:45 -- pm/common@21 -- # date +%s 00:03:07.798 08:48:45 -- pm/common@25 -- # sleep 1 00:03:07.798 08:48:45 -- pm/common@21 -- # date +%s 00:03:07.798 08:48:45 -- pm/common@21 -- # date +%s 00:03:07.798 08:48:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721803725 00:03:07.798 08:48:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721803725 00:03:07.798 08:48:45 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721803725 00:03:07.798 08:48:45 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721803725 00:03:07.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721803725_collect-vmstat.pm.log 00:03:07.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721803725_collect-cpu-load.pm.log 00:03:07.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721803725_collect-cpu-temp.pm.log 00:03:07.798 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721803725_collect-bmc-pm.bmc.pm.log 00:03:08.735 08:48:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.735 08:48:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.735 08:48:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:08.735 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.735 08:48:46 -- spdk/autotest.sh@59 -- # 
create_test_list 00:03:08.735 08:48:46 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:08.735 08:48:46 -- common/autotest_common.sh@10 -- # set +x 00:03:08.735 08:48:46 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:08.735 08:48:46 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:08.735 08:48:46 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:08.735 08:48:46 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:08.735 08:48:46 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:08.735 08:48:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:08.735 08:48:46 -- common/autotest_common.sh@1453 -- # uname 00:03:08.735 08:48:46 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:03:08.735 08:48:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:08.735 08:48:46 -- common/autotest_common.sh@1473 -- # uname 00:03:08.735 08:48:46 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:03:08.735 08:48:46 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:08.735 08:48:46 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:08.993 08:48:46 -- spdk/autotest.sh@72 -- # hash lcov 00:03:08.993 08:48:46 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:08.993 08:48:46 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:08.993 --rc lcov_branch_coverage=1 00:03:08.993 --rc lcov_function_coverage=1 00:03:08.993 --rc genhtml_branch_coverage=1 00:03:08.993 --rc genhtml_function_coverage=1 00:03:08.993 --rc genhtml_legend=1 00:03:08.993 --rc geninfo_all_blocks=1 00:03:08.993 ' 00:03:08.993 08:48:46 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:08.993 --rc lcov_branch_coverage=1 00:03:08.993 --rc lcov_function_coverage=1 00:03:08.993 --rc genhtml_branch_coverage=1 00:03:08.993 --rc genhtml_function_coverage=1 00:03:08.993 --rc genhtml_legend=1 00:03:08.993 --rc geninfo_all_blocks=1 00:03:08.993 ' 00:03:08.993 08:48:46 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:08.993 --rc lcov_branch_coverage=1 00:03:08.993 --rc lcov_function_coverage=1 00:03:08.993 --rc genhtml_branch_coverage=1 00:03:08.993 --rc genhtml_function_coverage=1 00:03:08.993 --rc genhtml_legend=1 00:03:08.993 --rc geninfo_all_blocks=1 00:03:08.993 --no-external' 00:03:08.993 08:48:46 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:08.993 --rc lcov_branch_coverage=1 00:03:08.993 --rc lcov_function_coverage=1 00:03:08.993 --rc genhtml_branch_coverage=1 00:03:08.993 --rc genhtml_function_coverage=1 00:03:08.993 --rc genhtml_legend=1 00:03:08.993 --rc geninfo_all_blocks=1 00:03:08.993 --no-external' 00:03:08.993 08:48:46 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:08.993 lcov: LCOV version 1.14 00:03:08.993 08:48:46 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:10.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no 
functions found 00:03:10.896 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:10.896 [geninfo went on to emit the identical "no functions found" / "GCOV did not produce any data" warning pair for every remaining .gcno file under spdk/test/cpp_headers/ (accel_module.gcno through xor.gcno and zipf.gcno); the repeated lines are elided here]
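The warning flood above is expected rather than an error: the cpp_headers test compiles each public SPDK header as its own translation unit, so most of the resulting .gcno files contain no executable functions and geninfo has nothing to record. The capture itself is the usual lcov baseline pattern; a minimal sketch of that pattern follows, with illustrative paths, since only the "-c -i -t Baseline" step has run at this point in the log.

    # sketch of the baseline-plus-test lcov flow (paths illustrative, not this job's)
    LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    lcov $LCOV_OPTS -q -c -i -t Baseline -d ./spdk -o cov_base.info      # zero-count baseline from .gcno files
    # ... run the test suites; execution writes .gcda counter files ...
    lcov $LCOV_OPTS -q -c -t Tests -d ./spdk -o cov_test.info            # capture the executed counters
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info  # merge so never-executed files still report 0%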
00:03:25.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:25.776 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:43.852 08:49:20 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:43.852 08:49:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:43.852 08:49:20 -- common/autotest_common.sh@10 -- # set +x 00:03:43.852 08:49:20 -- spdk/autotest.sh@91 -- # rm -f 00:03:43.852 08:49:20 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.852 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:43.852 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:43.852 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:43.852 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:43.852 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:43.852 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:43.852 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:43.852 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:43.852 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:03:43.852 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:43.852 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:43.852 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:43.852 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:43.852 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:43.852 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:43.852 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:43.852 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:43.852 08:49:21 -- spdk/autotest.sh@96 -- # get_zoned_devs
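The reset and get_zoned_devs calls above begin autotest's pre-cleanup, traced below: zoned namespaces are excluded from the wipe, and /dev/nvme0n1 is scrubbed only after spdk-gpt.py and blkid agree it carries no partition table, which is what "No valid GPT data, bailing" reports. A condensed sketch of that guard, with the device name hardcoded for illustration:

    dev=nvme0n1
    zoned=$(cat /sys/block/$dev/queue/zoned 2>/dev/null || echo none)
    if [[ $zoned != none ]]; then
        echo "skipping zoned device /dev/$dev"             # zoned namespaces are left alone
    elif [[ -z "$(blkid -s PTTYPE -o value /dev/$dev)" ]]; then
        dd if=/dev/zero of=/dev/$dev bs=1M count=1         # no partition table found: zero the first MiB
    fi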
00:03:43.852 08:49:21 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:43.852 08:49:21 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:43.852 08:49:21 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:43.852 08:49:21 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:43.852 08:49:21 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:43.852 08:49:21 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:43.852 08:49:21 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:43.852 08:49:21 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:43.852 08:49:21 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:43.852 08:49:21 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:43.852 08:49:21 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:43.852 08:49:21 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:43.852 08:49:21 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:43.852 08:49:21 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:43.852 No valid GPT data, bailing 00:03:43.852 08:49:21 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:43.852 08:49:21 -- scripts/common.sh@391 -- # pt= 00:03:43.852 08:49:21 -- scripts/common.sh@392 -- # return 1 00:03:43.852 08:49:21 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:43.852 1+0 records in 00:03:43.852 1+0 records out 00:03:43.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00190204 s, 551 MB/s 00:03:43.852 08:49:21 -- spdk/autotest.sh@118 -- # sync 00:03:43.852 08:49:21 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:43.852 08:49:21 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:43.852 08:49:21 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.754 08:49:23 -- spdk/autotest.sh@124 -- # uname -s 00:03:45.754 08:49:23 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:45.754 08:49:23 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:45.754 08:49:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.754 08:49:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.754 08:49:23 -- common/autotest_common.sh@10 -- # set +x 00:03:45.754 ************************************ 00:03:45.754 START TEST setup.sh 00:03:45.754 ************************************ 00:03:45.754 08:49:23 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:45.754 * Looking for test storage... 
00:03:45.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.754 08:49:23 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:45.754 08:49:23 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:45.754 08:49:23 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:45.754 08:49:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.754 08:49:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.754 08:49:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.754 ************************************ 00:03:45.754 START TEST acl 00:03:45.754 ************************************ 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:45.754 * Looking for test storage... 00:03:45.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:45.754 08:49:23 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:45.754 08:49:23 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:45.754 08:49:23 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:45.754 08:49:23 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:47.129 08:49:25 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:47.129 08:49:25 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:47.129 08:49:25 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:47.129 08:49:25 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:47.129 08:49:25 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:47.129 08:49:25 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:48.503 Hugepages 00:03:48.503 node hugesize free / total 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ 
driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 00:03:48.503 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:0b:00.0 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:48.503 08:49:26 setup.sh.acl 
-- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:48.503 08:49:26 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:48.503 08:49:26 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.503 08:49:26 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.503 08:49:26 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:48.503 ************************************ 00:03:48.503 START TEST denied 00:03:48.503 ************************************ 00:03:48.503 08:49:26 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:48.503 08:49:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:0b:00.0' 00:03:48.503 08:49:26 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 
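Here denied() drives setup.sh config with the NVMe controller listed in PCI_BLOCKED and greps the output for the skip message (traced just above and below); the allowed test later inverts the check with PCI_ALLOWED. A minimal standalone reproduction, with the repository path shortened:

    # expect the blocked controller to be reported as skipped
    PCI_BLOCKED=" 0000:0b:00.0" ./spdk/scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:0b:00.0'
    # inverse check: bind only that controller
    PCI_ALLOWED="0000:0b:00.0" ./spdk/scripts/setup.sh config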
00:03:48.503 08:49:26 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:0b:00.0' 00:03:48.503 08:49:26 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:48.503 08:49:26 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:49.880 0000:0b:00.0 (8086 0a54): Skipping denied controller at 0000:0b:00.0 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:0b:00.0 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:0b:00.0 ]] 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:0b:00.0/driver 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:49.880 08:49:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.438 00:03:52.438 real 0m3.943s 00:03:52.438 user 0m1.168s 00:03:52.438 sys 0m1.865s 00:03:52.438 08:49:30 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.438 08:49:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:52.438 ************************************ 00:03:52.438 END TEST denied 00:03:52.438 ************************************ 00:03:52.438 08:49:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:52.438 08:49:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.438 08:49:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.438 08:49:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:52.438 ************************************ 00:03:52.438 START TEST allowed 00:03:52.438 ************************************ 00:03:52.438 08:49:30 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:52.438 08:49:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:0b:00.0 00:03:52.438 08:49:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:52.438 08:49:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:0b:00.0 .*: nvme -> .*' 00:03:52.438 08:49:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.438 08:49:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:54.973 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:03:54.973 08:49:32 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:54.973 08:49:32 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:54.973 08:49:32 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:54.973 08:49:32 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.973 08:49:32 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.349 00:03:56.349 real 0m3.761s 00:03:56.349 user 0m0.960s 00:03:56.349 sys 0m1.694s 00:03:56.349 08:49:34 
setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.349 08:49:34 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:56.349 ************************************ 00:03:56.349 END TEST allowed 00:03:56.349 ************************************ 00:03:56.349 00:03:56.349 real 0m10.490s 00:03:56.349 user 0m3.216s 00:03:56.349 sys 0m5.327s 00:03:56.349 08:49:34 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:56.349 08:49:34 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:56.349 ************************************ 00:03:56.349 END TEST acl 00:03:56.349 ************************************ 00:03:56.349 08:49:34 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:56.349 08:49:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:56.349 08:49:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.349 08:49:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:56.349 ************************************ 00:03:56.349 START TEST hugepages 00:03:56.349 ************************************ 00:03:56.349 08:49:34 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:56.349 * Looking for test storage... 00:03:56.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.349 08:49:34 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 37585604 kB' 'MemAvailable: 41519080 kB' 'Buffers: 3736 kB' 'Cached: 16087768 kB' 'SwapCached: 0 kB' 'Active: 12931928 kB' 'Inactive: 3692572 kB' 'Active(anon): 12492156 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 
kB' 'AnonPages: 536408 kB' 'Mapped: 178548 kB' 'Shmem: 11959160 kB' 'KReclaimable: 445568 kB' 'Slab: 834956 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 389388 kB' 'KernelStack: 12880 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 13629928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 
08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:56.350 08:49:34 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:56.350 08:49:34 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/common.sh@31-32 -- # read/compare loop: PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp != Hugepagesize -- continue
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@21-24 -- # unset -v HUGE_EVEN_ALLOC HUGEMEM HUGENODE NRHUGE
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@29-30 -- # nodes_sys[0]=2048, nodes_sys[1]=0
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@32-33 -- # no_nodes=2, (( no_nodes > 0 ))
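The read/compare loop condensed above is a plain key lookup over /proc/meminfo: split each "Key: value [kB]" line on ': ', skip non-matching keys, and print the value of the requested key. A minimal standalone sketch of that pattern -- illustrative only, not the verbatim setup/common.sh helper, and the function name here is hypothetical:

get_meminfo_sketch() {
    # Mirrors the trace: every non-matching key hits "continue" until the
    # requested key (Hugepagesize above) is reached, then its value is
    # echoed and the function returns 0.
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on this box (2 MB hugepages)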
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # echo 0 > nr_hugepages for every hugepage size under node0 and node1
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:56.351 08:49:34 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:56.351 ************************************
00:03:56.351 START TEST default_setup
00:03:56.352 ************************************
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49-57 -- # size=2097152, node_ids=('0'), nr_hugepages=1024
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62-70 -- # user_nodes=('0'), _nr_hugepages=1024, _no_nodes=2
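clear_hp above boils down to writing 0 into each per-node, per-size nr_hugepages sysfs file (four writes here: two hugepage sizes across two nodes). A hedged sketch of that step, assuming the sysfs layout shown in the trace and root privileges; the function name is hypothetical:

clear_hp_sketch() {
    local node hp
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            # Release any pre-allocated pages of this size on this node.
            echo 0 > "$hp/nr_hugepages"
        done
    done
}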
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[0]=1024
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.352 08:49:34 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:57.729 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:57.729 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:57.729 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:58.674 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89-94 -- # local node sorted_t sorted_s surp resv anon
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem
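The nodes_test[0]=1024 assignment above follows from simple arithmetic that the meminfo snapshots below also confirm (Hugetlb: 2097152 kB with Hugepagesize: 2048 kB). A one-line check; the variable names are local to this example:

size_kb=2097152        # size argument seen in get_test_nr_hugepages above
hugepagesize_kb=2048   # Hugepagesize reported by /proc/meminfo
echo $(( size_kb / hugepagesize_kb ))   # -> 1024 pages, all placed on node 0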
00:03:58.674 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39697868 kB' 'MemAvailable: 43631344 kB' 'Buffers: 3736 kB' 'Cached: 16087856 kB' 'SwapCached: 0 kB' 'Active: 12950104 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510332 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554260 kB' 'Mapped: 178708 kB' 'Shmem: 11959248 kB' 'KReclaimable: 445568 kB' 'Slab: 834376 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388808 kB' 'KernelStack: 12704 kB' 'PageTables: 8080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:03:58.675 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/compare loop: MemTotal ... HardwareCorrupted != AnonHugePages -- continue
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17-29 -- # get=HugePages_Surp, node unset, mem_f=/proc/meminfo, mapfile -t mem
00:03:58.676 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39699076 kB' 'MemAvailable: 43632552 kB' 'Buffers: 3736 kB' 'Cached: 16087860 kB' 'SwapCached: 0 kB' 'Active: 12950332 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510560 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554512 kB' 'Mapped: 178652 kB' 'Shmem: 11959252 kB' 'KReclaimable: 445568 kB' 'Slab: 834384 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388816 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:03:58.677 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/compare loop: MemTotal ... HugePages_Rsvd != HugePages_Surp -- continue
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
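When a node argument is supplied, the common.sh path traced above switches mem_f to /sys/devices/system/node/node$N/meminfo and strips the "Node N " prefix from each line with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") shown in the trace. A hedged sketch of that per-node variant, with a hypothetical function name and the same fallback to /proc/meminfo the trace takes when node is unset:

shopt -s extglob   # needed for the +([0-9]) pattern below
get_node_meminfo_sketch() {
    local get=$1 node=$2 line var val _
    local mem_f=/sys/devices/system/node/node$node/meminfo
    [[ -e $mem_f ]] || mem_f=/proc/meminfo    # fall back when no node is given
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")          # drop the "Node N " prefix on per-node lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_node_meminfo_sketch HugePages_Surp 0   # "0" per the snapshots above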
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@18-29 -- # node unset, mem_f=/proc/meminfo, mapfile -t mem
00:03:58.678 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39699296 kB' 'MemAvailable: 43632772 kB' 'Buffers: 3736 kB' 'Cached: 16087876 kB' 'SwapCached: 0 kB' 'Active: 12950200 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510428 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554328 kB' 'Mapped: 178568 kB' 'Shmem: 11959268 kB' 'KReclaimable: 445568 kB' 'Slab: 834360 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388792 kB' 'KernelStack: 12720 kB' 'PageTables: 7952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # read/compare loop: MemTotal ... PageTables != HugePages_Rsvd -- continue
00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.679 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 
08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:58.680 nr_hugepages=1024 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:58.680 resv_hugepages=0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:58.680 surplus_hugepages=0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:58.680 anon_hugepages=0 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # 
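[Editor's note: the condensed scan above is get_meminfo's linear pass over /proc/meminfo: set IFS=': ', read one "var val" pair per line, and continue until var equals the requested key. A minimal standalone sketch of that lookup follows; the helper name get_meminfo_value is illustrative, not part of the SPDK harness, and it assumes the stock "Key: value kB" meminfo layout.]

# Sketch (editor's illustration, hypothetical helper): look up one key in
# /proc/meminfo, or in a node's meminfo file when a node id is given.
get_meminfo_value() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
    # Per-node statistics live under sysfs when a node id is supplied.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # node files prefix lines with "Node <id> "
        IFS=': ' read -r var val _ <<<"$line"  # "HugePages_Rsvd:   0" -> var=key, val=number
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

[Against the dump above, "get_meminfo_value HugePages_Rsvd" would print 0, and "get_meminfo_value HugePages_Surp 0" would read the same answer from node0's meminfo, matching the echo/return pairs in the trace.]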
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:58.680 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39700000 kB' 'MemAvailable: 43633476 kB' 'Buffers: 3736 kB' 'Cached: 16087900 kB' 'SwapCached: 0 kB' 'Active: 12950284 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510512 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554400 kB' 'Mapped: 178568 kB' 'Shmem: 11959292 kB' 'KReclaimable: 445568 kB' 'Slab: 834360 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388792 kB' 'KernelStack: 12752 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651172 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed (00:03:58.680-58.682): the same setup/common.sh@31-32 read loop walks the dump above from MemTotal through Unaccepted; none matches HugePages_Total, so each iteration traces "continue"]
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
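[Editor's note: get_nodes above discovers the two NUMA nodes by globbing sysfs with the extglob pattern node+([0-9]). A standalone sketch follows, assuming 2048 kB pages as in this run; reading each count from the per-node nr_hugepages knob is this editor's stand-in for however the harness seeded nodes_sys (the trace already shows the results, 1024 and 0).]

# Sketch (editor's illustration): enumerate NUMA nodes the way get_nodes does.
shopt -s extglob                       # required for the node+([0-9]) pattern below
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    [[ -d $node ]] || continue         # skip the literal pattern if nothing matched
    id=${node##*node}                  # ".../node0" -> "0"
    # Count of 2048 kB huge pages currently assigned to this node.
    nodes_sys[id]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
echo "no_nodes=$no_nodes"              # 2 on this rig: node0=1024, node1=0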
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:58.682 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18554536 kB' 'MemUsed: 14275348 kB' 'SwapCached: 0 kB' 'Active: 7726624 kB' 'Inactive: 3338088 kB' 'Active(anon): 7370876 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860488 kB' 'Mapped: 117812 kB' 'AnonPages: 207372 kB' 'Shmem: 7166652 kB' 'KernelStack: 7624 kB' 'PageTables: 4756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152100 kB' 'Slab: 331216 kB' 'SReclaimable: 152100 kB' 'SUnreclaim: 179116 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed (00:03:58.682-58.943): the setup/common.sh@31-32 read loop walks node0's dump above from MemTotal through HugePages_Free; none matches HugePages_Surp, so each iteration traces "continue"]
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:58.943 node0=1024 expecting 1024
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:58.943
00:03:58.943 real 0m2.403s
00:03:58.943 user 0m0.598s
00:03:58.943 sys 0m0.850s
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:58.943 08:49:36 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:58.943 ************************************
00:03:58.943 END TEST default_setup
00:03:58.943 ************************************
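[Editor's note: the pass/fail that just printed "node0=1024 expecting 1024" is plain bookkeeping over the three values fetched above. A sketch with this run's numbers plugged in; variable names mirror hugepages.sh, but the script itself is only an illustration.]

# Sketch (editor's illustration) of the default_setup accounting just logged.
nr_hugepages=1024                 # system-wide 2048 kB page request under test
resv=0                            # HugePages_Rsvd, from /proc/meminfo
surp=0                            # HugePages_Surp, from node0's meminfo
declare -a nodes_test=([0]=1024)  # HugePages_Total read from node0
(( nodes_test[0] += resv ))       # reserved pages are charged to the node
(( nodes_test[0] += surp ))       # surplus pages are charged to the node
echo "node0=${nodes_test[0]} expecting $nr_hugepages"
[[ ${nodes_test[0]} == "$nr_hugepages" ]] && echo PASS   # -> PASS on this run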
************************************
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.943 08:49:36 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:59.882 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.882 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.882 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.882 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.882 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.882 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.882 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.882 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:59.882 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:59.882 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:59.882 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:59.882 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:59.882 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:59.882 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:59.882 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:59.882 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:59.882 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
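"Already using the vfio-pci driver" means scripts/setup.sh found each device above still bound from an earlier run, so no rebinding was needed. One way to check a binding by hand (device address taken from the list above; the sysfs layout is standard Linux, though the relative form of the symlink target can vary):

    # Which kernel driver currently claims the device at 0000:0b:00.0?
    readlink /sys/bus/pci/devices/0000:0b:00.0/driver
    # ends in .../drivers/vfio-pci on this box, per the log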
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.148 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39695660 kB' 'MemAvailable: 43629136 kB' 'Buffers: 3736 kB' 'Cached: 16087968 kB' 'SwapCached: 0 kB' 'Active: 12950604 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510832 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554716 kB' 'Mapped: 178664 kB' 'Shmem: 11959360 kB' 'KReclaimable: 445568 kB' 'Slab: 834320 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388752 kB' 'KernelStack: 12736 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
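The snapshot above shows HugePages_Total: 1024 against Hugepagesize: 2048 kB, i.e. the two 512-page pools requested by get_test_nr_hugepages (1048576 kB per node / 2048 kB per page = 512). A sketch of that arithmetic and the per-node sysfs knob a HUGENODE-style allocation targets (standard kernel hugetlb layout; node 0 shown):

    size_kb=1048576                                            # 1 GiB per node, from the trace
    page_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo) # 2048 on this system
    echo $(( size_kb / page_kb ))                              # -> 512 pages per node
    # Per-node pool size lives here; NRHUGE=512 HUGENODE=0,1 ends up writing it:
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages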
[xtrace condensed: setup/common.sh@32 compares each field of the snapshot above against AnonHugePages (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active/Inactive(anon), Active/Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted), issuing continue for every non-match]
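Two of the checks in this pass guard against transparent hugepages skewing the count: hugepages.sh@96 pattern-matches /sys/kernel/mm/transparent_hugepage/enabled (the brackets in "always [madvise] never" mark the active policy), and AnonHugePages from meminfo counts the memory THP has actually handed out. A minimal sketch of those two checks (standard kernel paths; the real script's wording may differ):

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # On this box the file reads "always [madvise] never", so THP is madvise-only.
    if [[ $(<"$thp") != *"[never]"* ]]; then
        echo "THP not fully disabled; AnonHugePages may be nonzero"
    fi
    grep '^AnonHugePages:' /proc/meminfo   # 0 kB here, per the snapshot above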
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.150 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39696440 kB' 'MemAvailable: 43629916 kB' 'Buffers: 3736 kB' 'Cached: 16087968 kB' 'SwapCached: 0 kB' 'Active: 12950520 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510748 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554596 kB' 'Mapped: 178584 kB' 'Shmem: 11959360 kB' 'KReclaimable: 445568 kB' 'Slab: 834304 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388736 kB' 'KernelStack: 12752 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
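The four HugePages_* counters the verifier extracts one by one have these kernel meanings: Total is the configured pool size, Free the pages not currently backing a mapping, Rsvd the pages promised to mappings but not yet faulted in, and Surp the overcommit pages allocated beyond the static pool. A quick way to eyeball all four without the field-by-field walk traced here:

    grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo
    # Total: 1024, Free: 1024, Rsvd: 0, Surp: 0 on this box, per the snapshot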
[xtrace condensed: the field scan repeats against HugePages_Surp, issuing continue for every field from MemTotal up to and including HugePages_Rsvd]
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.152 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39697592 kB' 'MemAvailable: 43631068 kB' 'Buffers: 3736 kB' 'Cached: 16087988 kB' 'SwapCached: 0 kB' 'Active: 12951444 kB' 'Inactive: 3692572 kB' 'Active(anon): 12511672 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 555116 kB' 'Mapped: 178584 kB' 'Shmem: 11959380 kB' 'KReclaimable: 445568 kB' 'Slab: 834380 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388812 kB' 'KernelStack: 12768 kB' 'PageTables: 8112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
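With anon and surp both recorded as 0 and resv being fetched above, the skeleton of verify_nr_hugepages is visible through the trace. A compressed, illustrative sketch (variable names from the trace, reusing the get_meminfo helper sketched earlier; the real script folds these values into the expected per-node totals, as the "(( nodes_test[node] += 0 ))" step seen earlier shows, rather than simply asserting zeros):

    anon=$(get_meminfo AnonHugePages)   # kB of THP-backed memory - 0 here
    surp=$(get_meminfo HugePages_Surp)  # overcommit pages beyond the pool - 0 here
    resv=$(get_meminfo HugePages_Rsvd)  # reserved but not yet faulted - 0 here
    (( anon == 0 && surp == 0 && resv == 0 )) || echo "hugepage accounting is not clean"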
[xtrace condensed: the read loop walks the snapshot once more, comparing each field against HugePages_Rsvd and issuing continue for every non-match; trace continues]
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.153 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:00.154 nr_hugepages=1024 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:00.154 resv_hugepages=0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:00.154 surplus_hugepages=0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:00.154 anon_hugepages=0 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39696840 kB' 'MemAvailable: 43630316 kB' 'Buffers: 3736 kB' 'Cached: 16088008 kB' 'SwapCached: 0 kB' 'Active: 12949960 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510188 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554032 kB' 'Mapped: 178584 kB' 'Shmem: 11959400 kB' 'KReclaimable: 445568 kB' 'Slab: 834380 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388812 kB' 'KernelStack: 12688 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651052 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 
'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:00.154 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.155 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:00.156 08:49:38 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19605168 kB' 'MemUsed: 13224716 kB' 'SwapCached: 0 kB' 'Active: 7726060 kB' 'Inactive: 3338088 kB' 'Active(anon): 7370312 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860520 kB' 'Mapped: 117812 kB' 'AnonPages: 206768 kB' 'Shmem: 7166684 kB' 'KernelStack: 7624 kB' 'PageTables: 4752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152100 kB' 'Slab: 331152 kB' 'SReclaimable: 152100 kB' 'SUnreclaim: 179052 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.156 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue
[xtrace scan elided: setup/common.sh@31-32 walks the remaining node-meminfo fields (Slab ... HugePages_Free); every non-matching field takes the 'continue' branch]
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:00.157 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:00.158 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20092392 kB' 'MemUsed: 7619432 kB' 'SwapCached: 0 kB' 'Active: 5224328 kB' 'Inactive: 354484 kB' 'Active(anon): 5140304 kB' 'Inactive(anon): 0 kB' 'Active(file): 84024 kB' 'Inactive(file): 354484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5231272 kB' 'Mapped: 60772 kB' 'AnonPages: 347640 kB' 'Shmem: 4792764 kB' 'KernelStack: 5112 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293468 kB' 'Slab: 503228 kB' 'SReclaimable: 293468 kB' 'SUnreclaim: 209760 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace scan elided: the same per-field walk over the node1 snapshot above; every field before HugePages_Surp takes the 'continue' branch]
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:00.159 node0=512 expecting 512
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:00.159 node1=512 expecting 512
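A note on the elided scans: the lookup at setup/common.sh@31-33 is nothing more than a key search over a meminfo file. Below is a minimal bash re-creation, reconstructed from the trace alone (the real SPDK helper may differ in detail, so treat the body as illustrative):

    #!/usr/bin/env bash
    # Sketch: pick the right meminfo source, then scan it line by line
    # until the requested key matches, mirroring what the trace shows.
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=$2 mem_f=/proc/meminfo line var val _
        # A per-node query reads that NUMA node's own meminfo instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#Node +([0-9]) }       # strip the "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # e.g. HugePages_Surp
                echo "$val"                   # value only; a trailing "kB" lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    # Example against the node1 snapshot above: get_meminfo HugePages_Surp 1  -> 0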
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:00.159
00:04:00.159 real 0m1.369s
00:04:00.159 user 0m0.567s
00:04:00.159 sys 0m0.762s
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:00.159 08:49:38 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:00.159 ************************************
00:04:00.159 END TEST per_node_1G_alloc
00:04:00.159 ************************************
00:04:00.159 08:49:38 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:00.159 08:49:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:00.159 08:49:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:00.159 08:49:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:00.159 ************************************
00:04:00.159 START TEST even_2G_alloc
00:04:00.159 ************************************
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:00.159
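The get_test_nr_hugepages walk above boils down to two divisions: a 2097152 kB request at the default 2048 kB hugepage size yields nr_hugepages=1024, and get_test_nr_hugepages_per_node spreads those evenly across the two NUMA nodes, filling nodes_test from the highest index down. A hedged sketch of that arithmetic (variable names illustrative, not the SPDK originals):

    # 2 GiB request -> page count -> even per-node split, as traced above.
    size_kb=2097152                                 # argument to get_test_nr_hugepages
    hugepage_kb=2048                                # default hugepage size on this rig
    no_nodes=2                                      # NUMA nodes present

    nr_hugepages=$((size_kb / hugepage_kb))         # 1024
    declare -a nodes_test
    for ((node = no_nodes - 1; node >= 0; node--)); do
        nodes_test[node]=$((nr_hugepages / no_nodes))   # 512 per node
    done
    echo "${nodes_test[@]}"                         # 512 512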
08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:00.159 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:00.417 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:00.417 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.417 08:49:38 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.354 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.354 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.354 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.354 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.354 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.354 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.354 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.354 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.354 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:01.354 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.354 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:01.354 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:01.354 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:01.354 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:01.354 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:01.354 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:01.354 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:01.619 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:01.619 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.619 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.619 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.620 
08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39733660 kB' 'MemAvailable: 43667136 kB' 'Buffers: 3736 kB' 'Cached: 16088112 kB' 'SwapCached: 0 kB' 'Active: 12950540 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510768 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554472 kB' 'Mapped: 178676 kB' 'Shmem: 11959504 kB' 'KReclaimable: 445568 kB' 'Slab: 834100 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388532 kB' 'KernelStack: 12736 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.620 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39733956 kB' 'MemAvailable: 43667432 kB' 'Buffers: 3736 kB' 'Cached: 16088116 kB' 'SwapCached: 0 kB' 'Active: 12950812 kB' 'Inactive: 3692572 kB' 'Active(anon): 12511040 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554740 kB' 'Mapped: 178596 kB' 'Shmem: 11959508 kB' 'KReclaimable: 445568 kB' 'Slab: 834076 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388508 kB' 'KernelStack: 12752 kB' 'PageTables: 8012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.621 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.622 08:49:39 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.622 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace repeated for each remaining /proc/meminfo field ...]
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
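All of the IFS=': ' / read / compare / continue lines above are one helper, get_meminfo, scanning /proc/meminfo line by line until it hits the requested field and echoing that field's value. A minimal sketch, reconstructed from the setup/common.sh@17-33 statements visible in this trace (simplified, not the verbatim SPDK source):

  #!/usr/bin/env bash
  # Sketch of the get_meminfo loop traced above (reconstructed from the
  # setup/common.sh@17-33 statements in this log; not verbatim SPDK source).
  shopt -s extglob                              # needed for the +([0-9]) pattern
  get_meminfo() {
          local get=$1 node=${2:-}              # field name, optional NUMA node
          local var val _
          local mem_f=/proc/meminfo mem
          # With a node argument, prefer the per-node sysfs meminfo (common.sh@23-24).
          [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
                  mem_f=/sys/devices/system/node/node$node/meminfo
          mapfile -t mem < "$mem_f"
          mem=("${mem[@]#Node +([0-9]) }")      # strip the "Node N " prefix sysfs adds
          while IFS=': ' read -r var val _; do  # split "HugePages_Surp: 0" into var/val
                  [[ $var == "$get" ]] || continue
                  echo "$val"                   # e.g. 0 for HugePages_Surp in this run
                  return 0
          done < <(printf '%s\n' "${mem[@]}")
          return 1
  }

Called as get_meminfo HugePages_Surp it prints 0 here; with a node argument (get_meminfo HugePages_Surp 0, as later in this log) the same loop reads /sys/devices/system/node/node0/meminfo instead.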
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39732792 kB' 'MemAvailable: 43666268 kB' 'Buffers: 3736 kB' 'Cached: 16088132 kB' 'SwapCached: 0 kB' 'Active: 12950696 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510924 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554656 kB' 'Mapped: 178596 kB' 'Shmem: 11959524 kB' 'KReclaimable: 445568 kB' 'Slab: 834124 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388556 kB' 'KernelStack: 12768 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.623 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace repeated for each remaining /proc/meminfo field ...]
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:01.625 nr_hugepages=1024
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:01.625 resv_hugepages=0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:01.625 surplus_hugepages=0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:01.625 anon_hugepages=0
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
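The three lookups feed the sanity check traced at hugepages.sh@107-109: the kernel's total hugepage count must equal what the test requested plus any surplus and reserved pages. Spelled out with this run's values, using the get_meminfo sketch from earlier:

  # Hugepage accounting check as traced at setup/hugepages.sh@107-109,
  # with the values from this run noted in comments.
  nr_hugepages=1024                      # requested: 1024 x 2048kB pages = 2G
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2
  (( total == nr_hugepages ))            # also asserted, since surp == resv == 0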
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.625 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39733600 kB' 'MemAvailable: 43667076 kB' 'Buffers: 3736 kB' 'Cached: 16088156 kB' 'SwapCached: 0 kB' 'Active: 12950732 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510960 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 554656 kB' 'Mapped: 178596 kB' 'Shmem: 11959548 kB' 'KReclaimable: 445568 kB' 'Slab: 834124 kB' 'SReclaimable: 445568 kB' 'SUnreclaim: 388556 kB' 'KernelStack: 12768 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13651848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:04:01.626 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.626 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[... identical IFS=': ' / read / compare / continue trace repeated for each remaining /proc/meminfo field ...]
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
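get_nodes (hugepages.sh@27-33 above) builds the per-node expectation: this machine has two NUMA nodes, so an even 2G allocation of 1024 2MiB pages means 512 per node. The 512 in the trace is already expanded; deriving it as nr_hugepages divided by the node count is this sketch's assumption about how that literal arose:

  # Enumerate NUMA nodes and assign each an even share of the 1024 pages,
  # mirroring the get_nodes trace above (512 appears pre-expanded in the log).
  shopt -s extglob
  nr_hugepages=1024
  nodes_sys=()
  nodes=(/sys/devices/system/node/node+([0-9]))
  no_nodes=${#nodes[@]}                        # 2 on this machine
  (( no_nodes > 0 )) || exit 1
  for node in "${nodes[@]}"; do
          nodes_sys[${node##*node}]=$(( nr_hugepages / no_nodes ))   # 512 each here
  done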
+([0-9]) }") 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19616860 kB' 'MemUsed: 13213024 kB' 'SwapCached: 0 kB' 'Active: 7729188 kB' 'Inactive: 3338088 kB' 'Active(anon): 7373440 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860532 kB' 'Mapped: 117812 kB' 'AnonPages: 209864 kB' 'Shmem: 7166696 kB' 'KernelStack: 7624 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152100 kB' 'Slab: 331088 kB' 'SReclaimable: 152100 kB' 'SUnreclaim: 178988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.627 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:01.628 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.629 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20116052 kB' 'MemUsed: 7595772 kB' 'SwapCached: 0 kB' 'Active: 5223964 kB' 'Inactive: 354484 kB' 'Active(anon): 5139940 kB' 'Inactive(anon): 0 kB' 'Active(file): 84024 kB' 'Inactive(file): 354484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5231400 kB' 'Mapped: 61404 kB' 'AnonPages: 347148 kB' 'Shmem: 4792892 kB' 'KernelStack: 5128 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293468 kB' 'Slab: 503036 kB' 'SReclaimable: 293468 kB' 'SUnreclaim: 209568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace trimmed: the same setup/common.sh@31-32 key scan over the node1 dump above, 'continue' on each key until HugePages_Surp]
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:01.630 node0=512 expecting 512
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:01.630 node1=512 expecting 512
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:01.630
00:04:01.630 real	0m1.450s
00:04:01.630 user	0m0.634s
00:04:01.630 sys	0m0.778s
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
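The two reads just traced are setup/common.sh's get_meminfo: pick /proc/meminfo or the per-NUMA-node file, strip the "Node N " prefix the per-node files put on every line, then scan key/value pairs until the requested key matches. A minimal sketch of that pattern, reconstructed from the xtrace above (the function name here is hypothetical, and the top-level shopt is an assumption needed for the +([0-9]) glob):

    shopt -s extglob                         # assumption: enables +([0-9]) below
    get_meminfo_sketch() {                   # hypothetical name, for illustration
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Prefer the per-NUMA-node view when a node number is given and exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix every line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done
        echo 0                               # absent keys read as zero
    }
    get_meminfo_sketch HugePages_Surp 1      # prints 0 here, as in the trace

Returning 0 for absent keys keeps the caller's arithmetic, e.g. (( nodes_test[node] += resv )), safe on kernels that omit a field.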
00:04:01.630 08:49:39 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:01.630 ************************************
00:04:01.630 END TEST even_2G_alloc
00:04:01.630 ************************************
00:04:01.630 08:49:39 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:01.630 08:49:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:01.630 08:49:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:01.630 08:49:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.889 ************************************
00:04:01.889 START TEST odd_alloc
00:04:01.889 ************************************
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
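Those @81-@84 iterations are the whole odd-count distribution: HUGEMEM=2049 maps to 2098176 kB, which the helper turns into nr_hugepages=1025 two-megabyte pages, and the loop walks nodes from the highest index down, giving each an integer share of whatever is still unassigned. A standalone sketch of that arithmetic (plain variable names assumed; the division order is taken from the trace):

    nr=1025      # nr_hugepages from the trace above (HUGEMEM=2049)
    nodes=2      # _no_nodes
    declare -a nodes_test
    while (( nodes > 0 )); do
        nodes_test[nodes - 1]=$(( nr / nodes ))   # 1025/2 -> 512, then 513/1 -> 513
        nr=$(( nr - nodes_test[nodes - 1] ))      # 513 left, then 0
        (( nodes-- ))
    done
    echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # -> node0=513 node1=512

Re-dividing the remainder among the remaining nodes means the odd page settles on node0, matching the nodes_test[...]=512 and =513 assignments above.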
00:04:01.889 08:49:39 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:02.831 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.831 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.831 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.831 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.831 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.831 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.831 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.831 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:02.831 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:02.831 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:02.831 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:02.831 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:02.831 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:02.831 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:02.831 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:02.831 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:02.831 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.095 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.096 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39704840 kB' 'MemAvailable: 43638252 kB' 'Buffers: 3736 kB' 'Cached: 16088236 kB' 'SwapCached: 0 kB' 'Active: 12948968 kB' 'Inactive: 3692572 kB' 'Active(anon): 12509196 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553160 kB' 'Mapped: 177576 kB' 'Shmem: 11959628 kB' 'KReclaimable: 445504 kB' 'Slab: 833916 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388412 kB' 'KernelStack: 13184 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 13638896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196996 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
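The dump confirms the odd allocation took: HugePages_Total: 1025 at Hugepagesize: 2048 kB gives 1025 x 2048 kB = 2099200 kB, exactly the Hugetlb figure reported, and HugePages_Free: 1025 shows none of the pages are in use yet.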
[xtrace trimmed: setup/common.sh@31-32 scans each /proc/meminfo key above, 'continue' on every key until AnonHugePages]
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
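The anon=0 just recorded is gated by the transparent-hugepage check a few lines up: verify_nr_hugepages samples AnonHugePages only while /sys/kernel/mm/transparent_hugepage/enabled is not pinned to [never] (here it reads 'always [madvise] never', so the sample happens and comes back 0 kB). A standalone sketch of that gate (the sysfs path and bracket convention are the kernel's interface; the awk parse is an assumption, not SPDK's code):

    # The bracketed token names the active THP policy.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    else
        anon=0            # THP off: no anonymous memory can be huge-mapped
    fi
    echo "AnonHugePages=${anon:-0} kB"   # 0 kB in this run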
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.097 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39709896 kB' 'MemAvailable: 43643308 kB' 'Buffers: 3736 kB' 'Cached: 16088240 kB' 'SwapCached: 0 kB' 'Active: 12949116 kB' 'Inactive: 3692572 kB' 'Active(anon): 12509344 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552836 kB' 'Mapped: 177568 kB' 'Shmem: 11959632 kB' 'KReclaimable: 445504 kB' 'Slab: 833916 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388412 kB' 'KernelStack: 13120 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 13638916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace trimmed: the key scan resumes over the dump above; MemTotal through Inactive(file) all hit 'continue']
00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.098 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39709988 kB' 'MemAvailable: 43643400 kB' 'Buffers: 3736 kB' 'Cached: 16088260 kB' 'SwapCached: 0 kB' 'Active: 12948428 kB' 'Inactive: 3692572 kB' 'Active(anon): 12508656 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552080 kB' 'Mapped: 177568 kB' 'Shmem: 11959652 kB' 'KReclaimable: 445504 kB' 'Slab: 833972 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388468 kB' 'KernelStack: 13040 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 13636576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 
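The condensed loop above, and the identical ones that follow, are all iterations of the same lookup: get_meminfo dumps key/value pairs and scans them for the one requested key. A minimal sketch of the helper as read off the setup/common.sh@17-@33 trace (a reconstruction for reference, not the verbatim SPDK source; details such as the exact loop form may differ):

    # get_meminfo <key> [node] -- echo the value of <key> from /proc/meminfo,
    # or from a NUMA node's own meminfo when a node number is given.
    shopt -s extglob                       # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N"
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")    # the printf dump seen at common.sh@16
        return 1
    }

Called as get_meminfo HugePages_Surp here, it walks every key until HugePages_Surp matches and echoes 0, which hugepages.sh@99 stores as surp=0.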
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.099 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39709988 kB' 'MemAvailable: 43643400 kB' 'Buffers: 3736 kB' 'Cached: 16088260 kB' 'SwapCached: 0 kB' 'Active: 12948428 kB' 'Inactive: 3692572 kB' 'Active(anon): 12508656 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 552080 kB' 'Mapped: 177568 kB' 'Shmem: 11959652 kB' 'KReclaimable: 445504 kB' 'Slab: 833972 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388468 kB' 'KernelStack: 13040 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 13636576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[trace condensed: setup/common.sh@31-32 reads each key from MemTotal through HugePages_Free and hits continue on every one, since none matches HugePages_Rsvd]
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:03.101 nr_hugepages=1025
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:03.101 resv_hugepages=0
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:03.101 surplus_hugepages=0
00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:03.101 anon_hugepages=0
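With surplus and reserved both zero, the hugepages.sh@107-@110 arithmetic that follows simply asserts that the kernel's HugePages_Total matches the requested count. A worked form of the same check, using this run's numbers (1025 is deliberately odd so the pages cannot split evenly across the two NUMA nodes; the uneven 512/513 split appears in the get_nodes trace below):

    # Values as they appear in this run's trace:
    nr_hugepages=1025 surp=0 resv=0
    total=1025                                  # get_meminfo HugePages_Total on this host
    (( total == nr_hugepages + surp + resv ))   # holds: 1025 == 1025 + 0 + 0
    (( 512 + 513 == nr_hugepages ))             # odd total -> uneven per-node split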
setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39710300 kB' 'MemAvailable: 43643712 kB' 'Buffers: 3736 kB' 'Cached: 16088280 kB' 'SwapCached: 0 kB' 'Active: 12947392 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507620 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551088 kB' 'Mapped: 177568 kB' 'Shmem: 11959672 kB' 'KReclaimable: 445504 kB' 'Slab: 833972 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388468 kB' 'KernelStack: 12688 kB' 'PageTables: 7376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 13636596 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.101 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.102 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
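The scan traced above is get_meminfo() from setup/common.sh walking every meminfo field until it reaches the one requested (here HugePages_Total, answered with "echo 1025" just below). A minimal sketch of that loop, reconstructed from the file@line markers in the trace; the here-string plumbing and the extglob prefix-strip are assumptions rather than the verbatim SPDK source:

shopt -s extglob  # needed for the +([0-9]) pattern below

# get_meminfo FIELD [NODE] -- print FIELD's value from the global or per-node meminfo
get_meminfo() {
    local get=$1 node=$2 var val line
    local -a mem
    local mem_f=/proc/meminfo
    # per-node queries read the node's own meminfo instead (common.sh@23-24)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix (common.sh@29)
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue  # one [[ ... ]]/continue pair per skipped field
        echo "$val"                       # e.g. 1025 for HugePages_Total
        return 0
    done
    return 1
}

Every field skipped on the way (AnonPages, Mapped, Shmem, ...) emits one [[ ... ]] / continue pair in the xtrace output, which is why these scans dominate the log.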
00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19611060 kB' 'MemUsed: 13218824 kB' 'SwapCached: 0 kB' 'Active: 7723984 kB' 'Inactive: 3338088 kB' 
'Active(anon): 7368236 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860532 kB' 'Mapped: 117236 kB' 'AnonPages: 204668 kB' 'Shmem: 7166696 kB' 'KernelStack: 7592 kB' 'PageTables: 4504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152036 kB' 'Slab: 330868 kB' 'SReclaimable: 152036 kB' 'SUnreclaim: 178832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.103 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.104 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.364 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 20099240 kB' 'MemUsed: 7612584 kB' 'SwapCached: 0 kB' 'Active: 5223264 kB' 'Inactive: 354484 kB' 'Active(anon): 5139240 kB' 'Inactive(anon): 0 kB' 'Active(file): 84024 kB' 'Inactive(file): 354484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5231528 kB' 'Mapped: 60332 kB' 'AnonPages: 346308 kB' 'Shmem: 4793020 kB' 'KernelStack: 5112 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293468 kB' 'Slab: 503072 kB' 'SReclaimable: 293468 kB' 'SUnreclaim: 209604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 
08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
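This second scan (node=1) resolves the same way as node 0's did: HugePages_Surp comes back 0, and the value feeds the per-node accumulation at hugepages.sh@115-117. Roughly, assuming nodes_test and resv are populated as earlier in the trace:

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))                                   # hugepages.sh@116
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117, 0 for both nodes here
done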
00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.365 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
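Once this scan resolves, the test prints its verdict lines ("node0=512 expecting 513", "node1=513 expecting 512" just below). The kernel spread the odd 1025-page allocation as 512+513 but on the opposite nodes from the test's bookkeeping, so hugepages.sh@126-130 compares sorted sets of counts rather than per-node values. A sketch reconstructed from the trace; the array declarations and echo format are assumptions:

nodes_sys=([0]=512 [1]=513)   # actual per-node HugePages_Total read from sysfs (@30)
nodes_test=([0]=513 [1]=512)  # the test's expected split
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1  # indexed-array keys come back in ascending order
    sorted_s[nodes_sys[node]]=1
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]  # here both sides expand to "512 513"

Keying the arrays by page count makes the check placement-agnostic: only the multiset of counts has to match, not which node holds which count.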
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:03.366 node0=512 expecting 513
08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:03.366 node1=513 expecting 512
08:49:41 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:03.366
00:04:03.366 real 0m1.480s
00:04:03.366 user 0m0.599s
00:04:03.366 sys 0m0.845s
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.366 08:49:41 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.366 ************************************
00:04:03.366 END TEST odd_alloc
00:04:03.366 ************************************
00:04:03.366 08:49:41 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:03.366 08:49:41 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.366 08:49:41 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.366 08:49:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.366 ************************************
00:04:03.366 START TEST custom_alloc
00:04:03.366 ************************************
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- #
(( size >= default_hugepages )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:03.366 08:49:41 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:04.302 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.302 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.302 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.302 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.302 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.302 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:04.302 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.302 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.302 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:04.302 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:04.302 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:04.302 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:04.302 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:04.302 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:04.302 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:04.302 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:04.567 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38652052 kB' 'MemAvailable: 42585464 kB' 'Buffers: 3736 kB' 'Cached: 16088376 kB' 'SwapCached: 0 kB' 'Active: 12947568 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507796 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551288 kB' 'Mapped: 177672 
kB' 'Shmem: 11959768 kB' 'KReclaimable: 445504 kB' 'Slab: 834260 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388756 kB' 'KernelStack: 12752 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 13636932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.567 
08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.567 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the get_meminfo loop reads each "var val" pair with IFS=': ' and hits continue for every key that is not AnonHugePages; keys scanned here: Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp]
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted do not match either]
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
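The loop traced above is setup/common.sh's get_meminfo helper: it snapshots /proc/meminfo into an array, then walks it one "Key: value kB" line at a time, splitting on IFS=': ' and continuing until the requested key matches. A minimal bash sketch of the same technique, reconstructed from this trace rather than copied from the SPDK sources, so details may differ:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup traced above (reconstructed from the
    # xtrace; the real setup/common.sh helper may differ in details).
    get_meminfo() {
        local get=$1 node=${2:-}   # key to fetch; optional NUMA node number
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node variant, when a node is given and its meminfo file exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        shopt -s extglob
        # Node files prefix every line with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        # Split each "Key: value kB" line on ':' and spaces; skip keys that
        # do not match (the long runs of 'continue' above), then print the
        # bare value and stop on the first match.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages   # prints 0 on the host traced here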
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.568 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [xtrace condensed: get=HugePages_Surp, node unset, mem_f=/proc/meminfo (no /sys/devices/system/node/node/meminfo), mapfile -t mem, "Node <N> " prefix stripped]
00:04:04.569 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38652052 kB' 'MemAvailable: 42585464 kB' 'Buffers: 3736 kB' 'Cached: 16088380 kB' 'SwapCached: 0 kB' 'Active: 12947296 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507524 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550944 kB' 'Mapped: 177584 kB' 'Shmem: 11959772 kB' 'KReclaimable: 445504 kB' 'Slab: 834268 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388764 kB' 'KernelStack: 12720 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 13636952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:04:04.569 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Rsvd fails the HugePages_Surp match and continues]
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-29 -- # [xtrace condensed: same setup with get=HugePages_Rsvd]
00:04:04.570 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 38653136 kB' 'MemAvailable: 42586548 kB' 'Buffers: 3736 kB' 'Cached: 16088392 kB' 'SwapCached: 0 kB' 'Active: 12947292 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507520 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 550908 kB' 'Mapped: 177584 kB' 'Shmem: 11959784 kB' 'KReclaimable: 445504 kB' 'Slab: 834264 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388760 kB' 'KernelStack: 12688 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 13636972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
00:04:04.571 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and continues]
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:04.572 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
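The checks at hugepages.sh@107 and @109 close the loop on the custom allocation: with surp=0, resv=0 and anon=0, the 1536 requested pages must all show up in HugePages_Total, and at Hugepagesize 2048 kB each they account exactly for the Hugetlb figure in the snapshots (1536 * 2048 kB = 3145728 kB). The same bookkeeping as a standalone sketch; variable names here are ours, not the script's:

    #!/usr/bin/env bash
    # Hugepage accounting check mirroring hugepages.sh@107-109 (a sketch,
    # not the verbatim SPDK code); expects 'requested' pages to be allocated.
    requested=1536
    surp=0 resv=0 anon=0   # surplus, reserved, anonymous THP (all 0 here)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    # Every requested page must be accounted for, with no surplus/reserve.
    (( requested == total + surp + resv )) || echo "unexpected hugepage counts"
    (( requested == total )) || echo "allocation short: $total of $requested"

    # 1536 pages * 2048 kB/page = 3145728 kB, the Hugetlb value above.
    echo "hugetlb footprint: $(( total * size_kb )) kB"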
11959808 kB' 'KReclaimable: 445504 kB' 'Slab: 834264 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388760 kB' 'KernelStack: 12688 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 13636992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:04.573 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
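The checks at setup/hugepages.sh@107-110 above assert one identity: the kernel's HugePages_Total must equal the 1536 pages this test requested (512 on node0 plus 1024 on node1) plus any surplus and reserved pages, all zero in this run. A sketch of that consistency check, reusing the hypothetical get_meminfo_field from the previous sketch:

    nr_hugepages=1536   # 512 (node0) + 1024 (node1), as requested by custom_alloc
    resv=$(get_meminfo_field HugePages_Rsvd)     # 0 in this run
    surp=$(get_meminfo_field HugePages_Surp)     # 0 in this run (probed earlier in the log)
    total=$(get_meminfo_field HugePages_Total)   # 1536 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2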
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace elided: get_meminfo locals; node 0 exists, so mem_f=/sys/devices/system/node/node0/meminfo is snapshotted and its "Node 0 " prefixes stripped (setup/common.sh@17-31)]
00:04:04.838 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 19611144 kB' 'MemUsed: 13218740 kB' 'SwapCached: 0 kB' 'Active: 7724628 kB' 'Inactive: 3338088 kB' 'Active(anon): 7368880 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860608 kB' 'Mapped: 117236 kB' 'AnonPages: 205284 kB' 'Shmem: 7166772 kB' 'KernelStack: 7608 kB' 'PageTables: 4556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152036 kB' 'Slab: 331008 kB' 'SReclaimable: 152036 kB' 'SUnreclaim: 178972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
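get_nodes (setup/hugepages.sh@27-33 above) enumerates /sys/devices/system/node/node+([0-9]) — an extglob pattern matching node0, node1, and so on — and ${node##*node} strips everything up to and including the last "node", leaving the bare index. Where the recorded values (512 and 1024) are read from is not visible in this excerpt; reading each node's per-size nr_hugepages knob, as below, is one plausible source and is an assumption of this sketch:

    # Sketch: collect per-NUMA-node 2048 kB hugepage pool sizes.
    shopt -s extglob nullglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        idx=${node##*node}   # /sys/devices/system/node/node1 -> 1
        # Assumed source of the per-node count:
        nodes_sys[idx]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this rig: nodes_sys=(512 1024)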
[xtrace elided: per-field scan of the node0 snapshot; every field before HugePages_Surp takes the continue branch]
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.839 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[xtrace elided: get_meminfo locals; node 1 exists, so mem_f=/sys/devices/system/node/node1/meminfo is snapshotted and its "Node 1 " prefixes stripped (setup/common.sh@17-31)]
00:04:04.840 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 19041028 kB' 'MemUsed: 8670796 kB' 'SwapCached: 0 kB' 'Active: 5222944 kB' 'Inactive: 354484 kB' 'Active(anon): 5138920 kB' 'Inactive(anon): 0 kB' 'Active(file): 84024 kB' 'Inactive(file): 354484 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 5231576 kB' 'Mapped: 60348 kB' 'AnonPages: 345852 kB' 'Shmem: 4793068 kB' 'KernelStack: 5096 kB' 'PageTables: 3200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 293468 kB' 'Slab: 503256 kB' 'SReclaimable: 293468 kB' 'SUnreclaim: 209788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
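The @115-117 loop above folds the reserved count and each node's HugePages_Surp (all zero in this run) into the expected per-node totals; the verification that produces the "node0=512 expecting 512" and "node1=1024 expecting 1024" lines just below then compares expectation against what sysfs reports. The real script collects the values as keys of the sorted_t/sorted_s arrays before the final [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] gate; a reduced, order-preserving sketch, reusing get_meminfo_field and nodes_sys from the earlier sketches:

    nodes_test=([0]=512 [1]=1024)   # what custom_alloc asked for per node
    resv=0                          # from the HugePages_Rsvd probe above
    for node in "${!nodes_test[@]}"; do
        surp=$(get_meminfo_field HugePages_Surp "$node")
        (( nodes_test[node] += resv + surp ))
        echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
    done
    # Join and compare, mirroring the trace's literal-match gate:
    [[ $(IFS=,; echo "${nodes_test[*]}") == "512,1024" ]] || exit 1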
[xtrace elided: per-field scan of the node1 snapshot; every field before HugePages_Surp takes the continue branch]
00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.841 08:49:42
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:04.841 node0=512 expecting 512 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:04.841 node1=1024 expecting 1024 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:04.841 00:04:04.841 real 0m1.443s 00:04:04.841 user 0m0.625s 00:04:04.841 sys 0m0.780s 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.841 08:49:42 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 ************************************ 00:04:04.841 END TEST custom_alloc 00:04:04.841 ************************************ 00:04:04.841 08:49:42 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:04.841 08:49:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.841 08:49:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.841 08:49:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:04.841 ************************************ 00:04:04.841 START TEST no_shrink_alloc 00:04:04.841 ************************************ 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:04.841 08:49:42 
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:04.841 08:49:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:05.782 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.782 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.782 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.782 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.782 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.782 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.782 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.782 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:05.782 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:05.782 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:05.782 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:05.782 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:05.782 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:05.782 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:05.782 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:05.782 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:05.782 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
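The get_meminfo trace that follows shows setup/common.sh reading the meminfo file into an array, stripping any `Node N ` prefix, then scanning with `IFS=': ' read -r var val _` until it reaches the requested key. A standalone sketch of that scan pattern, assuming the standard `Key: value kB` layout (this is an illustration, not the setup/common.sh source):

# Sketch of the per-key scan traced below. Note: for per-node files the
# real script also strips the leading 'Node N ' token (the mem=(...)
# record below); this simplified version only handles /proc/meminfo.
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local var val _
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"                # e.g. 0 for AnonHugePages in this run
			return 0
		fi
	done < "$mem_f"
	return 1
}

On this machine `get_meminfo_sketch HugePages_Total` would print 1024, matching the snapshots below.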
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.047 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.048 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.048 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.048 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.048 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.048 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39677800 kB' 'MemAvailable: 43611212 kB' 'Buffers: 3736 kB' 'Cached: 16088504 kB' 'SwapCached: 0 kB' 'Active: 12947800 kB' 'Inactive: 3692572 kB' 'Active(anon): 12508028 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551348 kB' 'Mapped: 177608 kB' 'Shmem: 11959896 kB' 'KReclaimable: 445504 kB' 'Slab: 834320 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388816 kB' 'KernelStack: 12704 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637268 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: setup/common.sh@32 compares each meminfo key above against AnonHugePages, issuing `continue` per non-matching key, until the AnonHugePages entry is reached]
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.049 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39680300 kB' 'MemAvailable: 43613712 kB' 'Buffers: 3736 kB' 'Cached: 16088504 kB' 'SwapCached: 0 kB' 'Active: 12947664 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507892 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551268 kB' 'Mapped: 177600 kB' 'Shmem: 11959896 kB' 'KReclaimable: 445504 kB' 'Slab: 834308 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388804 kB' 'KernelStack: 12736 kB' 'PageTables: 7736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: the same per-key scan repeats against HugePages_Surp, `continue`-ing past every other key, including HugePages_Total, HugePages_Free and HugePages_Rsvd]
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.051 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39680308 kB' 'MemAvailable: 43613720 kB' 'Buffers: 3736 kB' 'Cached: 16088524 kB' 'SwapCached: 0 kB' 'Active: 12947652 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507880 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551248 kB' 'Mapped: 177600 kB' 'Shmem: 11959916 kB' 'KReclaimable: 445504 kB' 'Slab: 834392 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388888 kB' 'KernelStack: 12736 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: the same per-key scan begins against HugePages_Rsvd; this log capture breaks off partway through the loop]
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:06.053 nr_hugepages=1024 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:06.053 resv_hugepages=0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:06.053 surplus_hugepages=0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:06.053 anon_hugepages=0 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39680696 kB' 'MemAvailable: 43614108 kB' 'Buffers: 3736 kB' 'Cached: 16088544 kB' 'SwapCached: 0 kB' 'Active: 12947676 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507904 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551244 kB' 'Mapped: 177600 kB' 'Shmem: 11959936 kB' 'KReclaimable: 
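The block above is setup/common.sh's get_meminfo helper at work: it slurps a meminfo file into an array, strips any "Node <n>" prefix, then walks it one "key: value" pair at a time until the requested key matches (here HugePages_Rsvd, which comes back 0). A minimal bash sketch of that pattern, assuming the paths the trace shows; get_meminfo_sketch is an illustrative stand-in, not the SPDK helper itself:

  # Print the value of one meminfo key, optionally for a single NUMA node.
  #   get_meminfo_sketch HugePages_Rsvd      -> system-wide value
  #   get_meminfo_sketch HugePages_Surp 0    -> node0's value
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo
      # Per-node copies live in sysfs and prefix every line with "Node <n> ".
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed 's/^Node [0-9]* //' "$mem_f")
      return 1
  }

The IFS=': ' split is what makes "HugePages_Rsvd: 0" fall apart cleanly into var=HugePages_Rsvd and val=0, with any trailing unit ("kB") landing in the throwaway _.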
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.053 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39680696 kB' 'MemAvailable: 43614108 kB' 'Buffers: 3736 kB' 'Cached: 16088544 kB' 'SwapCached: 0 kB' 'Active: 12947676 kB' 'Inactive: 3692572 kB' 'Active(anon): 12507904 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551244 kB' 'Mapped: 177600 kB' 'Shmem: 11959936 kB' 'KReclaimable: 445504 kB' 'Slab: 834392 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388888 kB' 'KernelStack: 12736 kB' 'PageTables: 7760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637328 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB'
[xtrace: setup/common.sh@31-32 scanned every key from MemTotal through Unaccepted, hitting continue on each since none is HugePages_Total]
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
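Both scans feed one accounting identity: the pool is only considered settled when the expected page count equals HugePages_Total with nothing parked in the surplus or reserved buckets, which is what the traced (( 1024 == nr_hugepages + surp + resv )) checks assert. A sketch of that arithmetic, reusing the hypothetical get_meminfo_sketch helper from above (the 1024 is simply what this test configured):

  expected=1024                                   # pages this test asked for
  total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
  surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
  nr_hugepages=$(( total - surp - resv ))         # pages plainly usable
  # Mirrors the traced check: nothing leaked, nothing silently grown.
  (( expected == nr_hugepages + surp + resv )) || echo 'hugepage pool mismatch' >&2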
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.055 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18547912 kB' 'MemUsed: 14281972 kB' 'SwapCached: 0 kB' 'Active: 7724520 kB' 'Inactive: 3338088 kB' 'Active(anon): 7368772 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860724 kB' 'Mapped: 117232 kB' 'AnonPages: 205120 kB' 'Shmem: 7166888 kB' 'KernelStack: 7592 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152036 kB' 'Slab: 331116 kB' 'SReclaimable: 152036 kB' 'SUnreclaim: 179080 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace: setup/common.sh@31-32 scanned node0's keys from MemTotal through HugePages_Free, hitting continue on each since none is HugePages_Surp]
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
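get_nodes, traced just above, discovers the NUMA layout by globbing /sys/devices/system/node/node<N> (two nodes on this box, with node0 expected to hold all 1024 pages and node1 none), and the per-node loop then re-reads node0's own meminfo copy for HugePages_Surp. A sketch of such a walk; reading the 2048kB pool file is an assumption for illustration, since the trace assigns the expected counts directly:

  shopt -s extglob nullglob
  declare -A nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      n=${node##*node}                  # "/sys/.../node1" -> "1"
      # Hypothetical: read the node's current 2 MiB hugepage pool size.
      nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"      # 2 on this machine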
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.057 node0=1024 expecting 1024
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.057 08:49:44 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
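With node0 verified at 1024 pages, the test moves to the scenario the suite is named for: it sets CLEAR_HUGE=no and NRHUGE=512 and re-runs scripts/setup.sh, expecting the script to leave the larger pre-existing pool alone rather than shrink it, which is exactly what the INFO line below reports. Roughly how that invocation looks from a shell, using the workspace path from the trace:

  # Request 512 pages without clearing what is already there; setup.sh
  # keeps the existing 1024 pages on node0 instead of shrinking the pool.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  CLEAR_HUGE=no NRHUGE=512 ./scripts/setup.sh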
00:04:07.434 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.434 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.434 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.434 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.434 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.434 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.434 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.434 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.434 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:07.434 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:07.434 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:07.434 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:07.434 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:07.434 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:07.434 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:07.434 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:07.434 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:07.434 INFO: Requested 512 hugepages but 1024 already allocated on node0
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
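The @96 test above is a transparent-hugepage guard: the string it matches against ("always [madvise] never") is the content of /sys/kernel/mm/transparent_hugepage/enabled, and the != *\[\n\e\v\e\r\]* pattern asks whether THP is hard-disabled. Only when the selected mode is not [never] does verify_nr_hugepages go on to read AnonHugePages, since THP could otherwise be handing out anonymous hugepages behind the allocator's back. A sketch of that check, reusing the hypothetical helper from earlier (the variable names are mine):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # THP active: count its pages
  else
      anon=0                                     # THP off: nothing to account
  fi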
'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:07.435
[log trimmed: setup/common.sh@31-32 step "read -r var val _" through every field of the snapshot above, hitting "continue" on each key that is not AnonHugePages]
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:07.436
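The block above is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or a per-node meminfo file under sysfs), strips any "Node <N> " prefix, then walks the fields with IFS=': ' and read -r until the requested key matches and its value is echoed. A minimal standalone sketch of the same parsing idea, assuming only the standard /proc and sysfs layouts (the function name meminfo_value is this editor's, not SPDK's):

    #!/usr/bin/env bash
    # Print the value of one meminfo field, optionally for a single NUMA node.
    meminfo_value() {
        local get=$1 node=$2 mem_f=/proc/meminfo var val _
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node lines carry a "Node <N> " prefix; drop it so both file
        # layouts parse the same, then split each line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

    meminfo_value AnonHugePages     # prints 0 on the machine traced here
    meminfo_value HugePages_Free 0  # per-node variant

Unlike the traced helper, this sketch strips the node prefix with sed instead of the extglob expansion mem=("${mem[@]#Node +([0-9]) }"); the lookup behavior is the same.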
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.436
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39674060 kB' 'MemAvailable: 43607472 kB' 'Buffers: 3736 kB' 'Cached: 16088616 kB' 'SwapCached: 0 kB' 'Active: 12948540 kB' 'Inactive: 3692572 kB' 'Active(anon): 12508768 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551892 kB' 'Mapped: 177676 kB' 'Shmem: 11960008 kB' 'KReclaimable: 445504 kB' 'Slab: 834188 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388684 kB' 'KernelStack: 12704 kB' 'PageTables: 7644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:07.436
[log trimmed: setup/common.sh@31-32 step "read -r var val _" through every field of the snapshot above, hitting "continue" on each key that is not HugePages_Surp]
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:07.438
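surp=0 above comes straight out of the HugePages_Surp field, and the next lookup fetches HugePages_Rsvd the same way. Outside the harness, the same accounting can be read directly; the paths below are the standard kernel hugetlb ABI, with the node and page size matching this run:

    # Global accounting: the fields the trace just parsed.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages)' /proc/meminfo
    # Per-node 2 MiB pool on node0: total, free and surplus pages
    # (reserved pages are only tracked globally in /proc/meminfo).
    d=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    cat "$d/nr_hugepages" "$d/free_hugepages" "$d/surplus_hugepages"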
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.438
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39674816 kB' 'MemAvailable: 43608228 kB' 'Buffers: 3736 kB' 'Cached: 16088636 kB' 'SwapCached: 0 kB' 'Active: 12948004 kB' 'Inactive: 3692572 kB' 'Active(anon): 12508232 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 551328 kB' 'Mapped: 177608 kB' 'Shmem: 11960028 kB' 'KReclaimable: 445504 kB' 'Slab: 834220 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388716 kB' 'KernelStack: 12736 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13637388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:07.438
[log trimmed: setup/common.sh@31-32 step "read -r var val _" through every field of the snapshot above, hitting "continue" on each key that is not HugePages_Rsvd]
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.440
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.440
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.440
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.440
nr_hugepages=1024
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.440
resv_hugepages=0
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.440
surplus_hugepages=0
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.440
anon_hugepages=0
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.440
08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.440
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.440 nr_hugepages=1024 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.440 resv_hugepages=0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.440 surplus_hugepages=0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.440 anon_hugepages=0 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.440 08:49:45 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 39677292 kB' 'MemAvailable: 43610704 kB' 'Buffers: 3736 kB' 'Cached: 16088656 kB' 'SwapCached: 0 kB' 'Active: 12950336 kB' 'Inactive: 3692572 kB' 'Active(anon): 12510564 kB' 'Inactive(anon): 0 kB' 'Active(file): 439772 kB' 'Inactive(file): 3692572 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553716 kB' 'Mapped: 178040 kB' 'Shmem: 11960048 kB' 'KReclaimable: 445504 kB' 'Slab: 834220 kB' 'SReclaimable: 445504 kB' 'SUnreclaim: 388716 kB' 'KernelStack: 12752 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 13640216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 39936 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1767004 kB' 'DirectMap2M: 17027072 kB' 'DirectMap1G: 50331648 kB' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.440 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.441 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
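The trace above is setup/common.sh's get_meminfo walking the snapshot it just printed: every meminfo line is split on IFS=': ' into a key and a value, and the loop continues until the requested key matches. A minimal sketch of the same lookup, assuming Bash with extglob; this is a simplified stand-in, not SPDK's exact helper:

#!/usr/bin/env bash
# Simplified stand-in for setup/common.sh's get_meminfo: slurp the
# meminfo file, strip any "Node <N> " prefix (per-node copies carry
# one), then split each line on ": " until the requested key matches.
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local mem line var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }") # "Node 0 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val" # e.g. 1024 for HugePages_Total in the run above
            return 0
        fi
    done
    return 1
}

get_meminfo HugePages_Total # global pool
get_meminfo HugePages_Rsvd  # 0 in the run above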
00:04:07.441 [xtrace condensed: the same setup/common.sh@31-32 scan stepped through Buffers through HugePages_Free without matching HugePages_Total] 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.442 08:49:45
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 18537764 kB' 'MemUsed: 14292120 kB' 'SwapCached: 0 kB' 'Active: 7724772 kB' 'Inactive: 3338088 kB' 'Active(anon): 7369024 kB' 'Inactive(anon): 0 kB' 'Active(file): 355748 kB' 'Inactive(file): 3338088 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10860824 kB' 'Mapped: 117232 kB' 'AnonPages: 204748 kB' 'Shmem: 7166988 kB' 'KernelStack: 7528 kB' 'PageTables: 4316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 152036 kB' 'Slab: 331028 kB' 'SReclaimable: 152036 kB' 'SUnreclaim: 178992 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.442 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
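Here the same lookup runs against node0's copy of meminfo under /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0" prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. hugepages.sh then checks that the per-node HugePages counts add up to the global pool with no surplus. A sketch of that cross-check, reusing the get_meminfo sketch above; helper and variable names are illustrative, not SPDK's exact code:

#!/usr/bin/env bash
# Cross-check sketch: each node's HugePages_Total must sum to the
# global pool, and no node may report surplus pages.
shopt -s extglob nullglob

verify_hugepage_distribution() {
    local node total=0 global surp
    global=$(get_meminfo HugePages_Total) || return 1
    for node in /sys/devices/system/node/node+([0-9]); do
        node=${node##*node} # /sys/.../node0 -> 0
        (( total += $(get_meminfo HugePages_Total "$node") ))
        surp=$(get_meminfo HugePages_Surp "$node")
        if (( surp != 0 )); then
            echo "node$node reports $surp surplus pages" >&2
            return 1
        fi
    done
    (( total == global )) # 1024 == 1024 in the run above
}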
00:04:07.443 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.443 [xtrace condensed: setup/common.sh@31-32 stepped through node0's remaining meminfo keys (Inactive through HugePages_Free); none matched HugePages_Surp] 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:07.444 node0=1024 expecting 1024 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc --
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:07.444 00:04:07.444 real 0m2.753s 00:04:07.444 user 0m1.130s 00:04:07.444 sys 0m1.542s 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.444 08:49:45 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:07.444 ************************************ 00:04:07.444 END TEST no_shrink_alloc 00:04:07.444 ************************************ 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.444 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:07.701 08:49:45 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:07.701 00:04:07.701 real 0m11.284s 00:04:07.701 user 0m4.304s 00:04:07.701 sys 0m5.811s 00:04:07.701 08:49:45 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.701 08:49:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:07.701 ************************************ 00:04:07.701 END TEST hugepages 00:04:07.701 ************************************ 00:04:07.701 08:49:45 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.701 08:49:45 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.701 08:49:45 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.701 08:49:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:07.701 ************************************ 00:04:07.701 START TEST driver 00:04:07.701 ************************************ 00:04:07.702 08:49:45 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:07.702 * Looking for test storage... 
00:04:07.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:07.702 08:49:45 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:07.702 08:49:45 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:07.702 08:49:45 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.262 08:49:48 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.262 08:49:48 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.262 08:49:48 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.262 08:49:48 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:10.262 ************************************ 00:04:10.262 START TEST guess_driver 00:04:10.262 ************************************ 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:10.262 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
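pick_driver settles on vfio-pci here because the host exposes 141 populated IOMMU groups and modprobe --show-depends resolves the whole vfio_pci chain (irqbypass, iommufd, vfio, vfio_iommu_type1, vfio-pci-core, vfio-pci) without loading anything. A sketch of that decision, simplified from setup/driver.sh and assuming the same sysfs knobs; the fallback driver name is an assumption, not necessarily SPDK's choice:

#!/usr/bin/env bash
# Sketch of the pick_driver/vfio decision traced above.
shopt -s nullglob

pick_driver() {
    local unsafe_vfio=N
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio-pci is usable when IOMMU groups exist (141 on this host) or
    # unsafe no-IOMMU mode is enabled, and the module chain resolves.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        # --show-depends prints the insmod chain without loading anything
        if modprobe --show-depends vfio_pci 2> /dev/null | grep -q '\.ko'; then
            echo vfio-pci
            return 0
        fi
    fi
    echo uio_pci_generic # illustrative fallback
}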
00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:10.262 Looking for driver=vfio-pci 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.262 08:49:48 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.199 [xtrace condensed: setup/driver.sh@57-61 read one "-> vfio-pci" marker line per configured device from setup.sh config (00:04:11.199 through 00:04:12.395) and confirmed each against the expected vfio-pci driver]
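The condensed loop reads one line per device from "setup output config" and flags a failure if any device is bound to something other than the expected driver. A sketch of that verification; the sample line shape ("0000:0b:00.0 (8086 0a54): nvme -> vfio-pci") and field layout are assumptions about setup.sh's output, and setup() is the test's own wrapper:

#!/usr/bin/env bash
# Sketch of the marker-check loop: keep only lines whose fifth field is
# the "->" arrow marker, then compare the bound driver to the expected one.
check_bound_drivers() {
    local expected=$1 fail=0
    local _ marker setup_driver
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == "->" ]] || continue # skip non-binding output lines
        [[ $setup_driver == "$expected" ]] || fail=1
    done < <(setup output config) # assumes the test's setup() wrapper
    return "$fail"
}

check_bound_drivers vfio-pci && echo "all devices bound to vfio-pci"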
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:11.459 08:49:49 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.395 08:49:50 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:12.395 08:49:50 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:12.395 08:49:50 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:12.654 08:49:50 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:12.654 08:49:50 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:12.654 08:49:50 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:12.654 08:49:50 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.191 00:04:15.191 real 0m4.941s 00:04:15.191 user 0m1.101s 00:04:15.191 sys 0m1.841s 00:04:15.191 08:49:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.191 08:49:53 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.191 ************************************ 00:04:15.191 END TEST guess_driver 00:04:15.191 ************************************ 00:04:15.191 00:04:15.191 real 0m7.550s 00:04:15.191 user 0m1.716s 00:04:15.191 sys 0m2.856s 00:04:15.191 08:49:53 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.191 
08:49:53 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:15.191 ************************************ 00:04:15.191 END TEST driver 00:04:15.191 ************************************ 00:04:15.191 08:49:53 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:15.191 08:49:53 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.191 08:49:53 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.191 08:49:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.191 ************************************ 00:04:15.191 START TEST devices 00:04:15.191 ************************************ 00:04:15.191 08:49:53 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:15.191 * Looking for test storage... 00:04:15.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.191 08:49:53 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:15.191 08:49:53 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:15.191 08:49:53 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.191 08:49:53 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:0b:00.0 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\b\:\0\0\.\0* ]] 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:17.096 08:49:54 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:17.096 No valid GPT data, 
bailing 00:04:17.096 08:49:54 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:17.096 08:49:54 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:17.096 08:49:54 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:0b:00.0 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.096 08:49:54 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:17.096 ************************************ 00:04:17.096 START TEST nvme_mount 00:04:17.096 ************************************ 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:17.096 08:49:54 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:17.096 08:49:54 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:18.035 Creating new GPT entries in memory. 00:04:18.035 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:18.035 other utilities. 00:04:18.035 08:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:18.035 08:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.035 08:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:18.035 08:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:18.035 08:49:55 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:18.973 Creating new GPT entries in memory. 00:04:18.973 The operation has completed successfully. 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3624238 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:18.973 08:49:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.914 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.915 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.915 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:19.915 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:19.915 08:49:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.175 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.175 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.434 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:20.434 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:20.434 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.434 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:20.434 08:49:58 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:0b:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.434 08:49:58 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == 
\0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:0b:00.0 data@nvme0n1 '' '' 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:21.813 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.814 08:49:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.746 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.746 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:22.747 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 
00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:23.005 08:50:00 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:23.005 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:23.005 00:04:23.005 real 0m6.275s 00:04:23.005 user 0m1.467s 00:04:23.005 sys 0m2.359s 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.005 08:50:01 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:23.005 ************************************ 00:04:23.005 END TEST nvme_mount 00:04:23.005 ************************************ 00:04:23.005 
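For readers skimming the trace: the nvme_mount test above is a partition/format/mount/verify/teardown cycle. A minimal standalone sketch of that cycle, assuming /dev/nvme0n1 is an expendable scratch disk and /tmp/nvme_mount_test is a throwaway mount point (both assumptions for illustration, not paths taken from the harness):

  # Destructive: assumes /dev/nvme0n1 holds no data worth keeping (assumption)
  MNT=/tmp/nvme_mount_test                       # hypothetical mount point
  sgdisk /dev/nvme0n1 --zap-all                  # clear any existing GPT/MBR structures
  sgdisk /dev/nvme0n1 --new=1:2048:2099199       # one 1 GiB partition, as in the trace
  mkfs.ext4 -qF /dev/nvme0n1p1                   # quiet, forced format
  mkdir -p "$MNT" && mount /dev/nvme0n1p1 "$MNT"
  touch "$MNT/test_nvme"                         # dummy file the verify step checks for
  [ -e "$MNT/test_nvme" ] && echo "mount verified"
  umount "$MNT" && wipefs --all /dev/nvme0n1p1   # teardown, as cleanup_nvme does

As the trace shows, the second half of the test then repeats the same cycle against the whole unpartitioned disk (mkfs /dev/nvme0n1 ... 1024M) to cover that case as well.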
08:50:01 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:23.005 08:50:01 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.005 08:50:01 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.005 08:50:01 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:23.265 ************************************ 00:04:23.265 START TEST dm_mount 00:04:23.265 ************************************ 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:23.265 08:50:01 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:24.200 Creating new GPT entries in memory. 00:04:24.200 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:24.200 other utilities. 00:04:24.200 08:50:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:24.200 08:50:02 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:24.200 08:50:02 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:24.200 08:50:02 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:24.200 08:50:02 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:25.138 Creating new GPT entries in memory. 00:04:25.138 The operation has completed successfully. 
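The dm_mount test carves two 1 GiB partitions with the same helper; each sgdisk call is serialized with flock and the next start sector is derived from the previous end. A condensed sketch of that loop (device name and sizes read off the trace; the loop body paraphrases the setup/common.sh logic rather than quoting it):

  disk=/dev/nvme0n1
  size=$(( 1073741824 / 512 ))    # 1 GiB expressed in 512-byte sectors
  part_start=0 part_end=0
  for part in 1 2; do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock keeps concurrent writers off the same partition table
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
  done
  # yields --new=1:2048:2099199 and --new=2:2099200:4196351, matching the log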
00:04:25.138 08:50:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:25.138 08:50:03 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:25.138 08:50:03 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:25.138 08:50:03 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:25.138 08:50:03 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:26.077 The operation has completed successfully. 00:04:26.077 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:26.077 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.077 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3626624 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:0b:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:26.336 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.337 08:50:04 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.272 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:0b:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:0b:00.0 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:27.532 
08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:0b:00.0 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.532 08:50:05 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:0b:00.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\0\b\:\0\0\.\0 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:28.911 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:28.911 00:04:28.911 real 0m5.791s 00:04:28.911 user 0m0.974s 00:04:28.911 sys 0m1.655s 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.911 08:50:06 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:28.911 ************************************ 00:04:28.911 END TEST dm_mount 00:04:28.911 ************************************ 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 
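The teardown that completes below is the generic cleanup path: remove the device-mapper target first, then wipe filesystem signatures from each partition, and finally the GPT/PMBR from the whole disk. As a standalone sketch under the same scratch-disk assumption as the earlier sketches:

  mountpoint -q "$MNT" && umount "$MNT"              # $MNT as assumed above
  [ -L /dev/mapper/nvme_dm_test ] && dmsetup remove --force nvme_dm_test
  for p in /dev/nvme0n1p1 /dev/nvme0n1p2; do
    [ -b "$p" ] && wipefs --all "$p"                 # per-partition signatures
  done
  [ -b /dev/nvme0n1 ] && wipefs --all /dev/nvme0n1   # then the disk-level GPT/PMBR

The ordering matters: once the disk-level wipe lands and the kernel re-reads the partition table (the "calling ioctl to re-read partition table: Success" lines), the p1/p2 nodes disappear, so they have to be wiped first.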
00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:28.911 08:50:06 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:29.170 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:29.170 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:29.170 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:29.170 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:29.170 08:50:07 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:29.170 00:04:29.170 real 0m14.032s 00:04:29.170 user 0m3.126s 00:04:29.170 sys 0m5.063s 00:04:29.170 08:50:07 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.170 08:50:07 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:29.170 ************************************ 00:04:29.170 END TEST devices 00:04:29.170 ************************************ 00:04:29.170 00:04:29.170 real 0m43.595s 00:04:29.170 user 0m12.457s 00:04:29.170 sys 0m19.218s 00:04:29.170 08:50:07 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.170 08:50:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.170 ************************************ 00:04:29.170 END TEST setup.sh 00:04:29.170 ************************************ 00:04:29.170 08:50:07 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:30.549 Hugepages 00:04:30.549 node hugesize free / total 00:04:30.549 node0 1048576kB 0 / 0 00:04:30.549 node0 2048kB 2048 / 2048 00:04:30.549 node1 1048576kB 0 / 0 00:04:30.549 node1 2048kB 0 / 0 00:04:30.549 00:04:30.549 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.549 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:30.549 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:30.549 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:30.549 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:30.549 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:30.549 08:50:08 -- spdk/autotest.sh@130 -- # uname -s 
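The per-node hugepage figures in the status table above come straight from kernel sysfs counters; a short loop like the following reproduces that portion of the report (standard sysfs paths, nothing SPDK-specific):

  for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
      sz=${hp##*hugepages-}    # e.g. 2048kB or 1048576kB
      echo "${node##*/} $sz $(cat "$hp/free_hugepages") / $(cat "$hp/nr_hugepages")"
    done
  done

Run against the state above this would print lines like "node0 2048kB 2048 / 2048".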
00:04:30.549 08:50:08 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:30.549 08:50:08 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:30.549 08:50:08 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:31.551 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.551 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:31.811 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:32.748 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:32.748 08:50:10 -- common/autotest_common.sh@1530 -- # sleep 1 00:04:33.685 08:50:11 -- common/autotest_common.sh@1531 -- # bdfs=() 00:04:33.685 08:50:11 -- common/autotest_common.sh@1531 -- # local bdfs 00:04:33.685 08:50:11 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:04:33.685 08:50:11 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:04:33.685 08:50:11 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:33.685 08:50:11 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:33.685 08:50:11 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:33.685 08:50:11 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:33.685 08:50:11 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:33.943 08:50:11 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:33.943 08:50:11 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:04:33.943 08:50:11 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:34.877 Waiting for block devices as requested 00:04:35.136 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:35.136 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:35.136 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:35.400 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:35.400 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:35.400 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:35.400 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:35.660 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:35.660 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:04:35.660 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:35.920 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:35.920 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:35.920 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:36.178 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:36.178 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:04:36.178 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:04:36.178 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:04:36.437 08:50:14 -- common/autotest_common.sh@1536 -- # 
for bdf in "${bdfs[@]}" 00:04:36.437 08:50:14 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1500 -- # grep 0000:0b:00.0/nvme/nvme 00:04:36.437 08:50:14 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:04:36.437 08:50:14 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:04:36.437 08:50:14 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1543 -- # grep oacs 00:04:36.437 08:50:14 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:04:36.437 08:50:14 -- common/autotest_common.sh@1543 -- # oacs=' 0xf' 00:04:36.437 08:50:14 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:04:36.437 08:50:14 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:04:36.437 08:50:14 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:04:36.437 08:50:14 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:04:36.437 08:50:14 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:04:36.437 08:50:14 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:04:36.437 08:50:14 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:04:36.437 08:50:14 -- common/autotest_common.sh@1555 -- # continue 00:04:36.437 08:50:14 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:36.437 08:50:14 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.437 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:36.437 08:50:14 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:36.437 08:50:14 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:36.437 08:50:14 -- common/autotest_common.sh@10 -- # set +x 00:04:36.437 08:50:14 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.813 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.813 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:37.813 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:38.751 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:04:38.751 08:50:16 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:38.751 08:50:16 -- common/autotest_common.sh@728 -- # xtrace_disable 
00:04:38.751 08:50:16 -- common/autotest_common.sh@10 -- # set +x 00:04:38.751 08:50:16 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:38.751 08:50:16 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:04:38.751 08:50:16 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:04:38.751 08:50:16 -- common/autotest_common.sh@1575 -- # bdfs=() 00:04:38.751 08:50:16 -- common/autotest_common.sh@1575 -- # local bdfs 00:04:38.751 08:50:16 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:04:38.751 08:50:16 -- common/autotest_common.sh@1511 -- # bdfs=() 00:04:38.751 08:50:16 -- common/autotest_common.sh@1511 -- # local bdfs 00:04:38.751 08:50:16 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.751 08:50:16 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:38.751 08:50:16 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:04:38.751 08:50:16 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:04:38.751 08:50:16 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:04:38.751 08:50:16 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:04:38.751 08:50:16 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:04:38.751 08:50:16 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:04:38.751 08:50:16 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:38.751 08:50:16 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:04:38.751 08:50:16 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:0b:00.0 00:04:38.751 08:50:16 -- common/autotest_common.sh@1590 -- # [[ -z 0000:0b:00.0 ]] 00:04:38.751 08:50:16 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=3631811 00:04:38.751 08:50:16 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:38.751 08:50:16 -- common/autotest_common.sh@1596 -- # waitforlisten 3631811 00:04:38.751 08:50:16 -- common/autotest_common.sh@829 -- # '[' -z 3631811 ']' 00:04:38.751 08:50:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.751 08:50:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.751 08:50:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.751 08:50:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.751 08:50:16 -- common/autotest_common.sh@10 -- # set +x 00:04:39.011 [2024-07-24 08:50:16.878928] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:04:39.011 [2024-07-24 08:50:16.879019] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3631811 ] 00:04:39.011 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.011 [2024-07-24 08:50:16.910902] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
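get_nvme_bdfs_by_id, traced above, builds the BDF list from gen_nvme.sh and keeps only controllers whose PCI device id matches (0x0a54 here). A sketch of that enumerate-and-filter step under the same sysfs layout; the rootdir path mirrors this workspace:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Enumerate NVMe BDFs from the generated config, then filter by device id
mapfile -t all_bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
bdfs=()
for bdf in "${all_bdfs[@]}"; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")  # e.g. 0x0a54
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")
done
printf '%s\n' "${bdfs[@]}"   # prints 0000:0b:00.0 on this node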
00:04:39.011 [2024-07-24 08:50:16.942740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.011 [2024-07-24 08:50:17.028934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.269 08:50:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.269 08:50:17 -- common/autotest_common.sh@862 -- # return 0 00:04:39.269 08:50:17 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:04:39.269 08:50:17 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:04:39.269 08:50:17 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:04:42.560 nvme0n1 00:04:42.560 08:50:20 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:42.560 [2024-07-24 08:50:20.591695] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:42.560 [2024-07-24 08:50:20.591746] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:42.560 request: 00:04:42.560 { 00:04:42.560 "nvme_ctrlr_name": "nvme0", 00:04:42.560 "password": "test", 00:04:42.560 "method": "bdev_nvme_opal_revert", 00:04:42.560 "req_id": 1 00:04:42.560 } 00:04:42.560 Got JSON-RPC error response 00:04:42.560 response: 00:04:42.560 { 00:04:42.560 "code": -32603, 00:04:42.560 "message": "Internal error" 00:04:42.560 } 00:04:42.560 08:50:20 -- common/autotest_common.sh@1602 -- # true 00:04:42.560 08:50:20 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:04:42.560 08:50:20 -- common/autotest_common.sh@1606 -- # killprocess 3631811 00:04:42.560 08:50:20 -- common/autotest_common.sh@948 -- # '[' -z 3631811 ']' 00:04:42.560 08:50:20 -- common/autotest_common.sh@952 -- # kill -0 3631811 00:04:42.560 08:50:20 -- common/autotest_common.sh@953 -- # uname 00:04:42.560 08:50:20 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.560 08:50:20 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3631811 00:04:42.560 08:50:20 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.560 08:50:20 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.560 08:50:20 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3631811' 00:04:42.560 killing process with pid 3631811 00:04:42.560 08:50:20 -- common/autotest_common.sh@967 -- # kill 3631811 00:04:42.560 08:50:20 -- common/autotest_common.sh@972 -- # wait 3631811 00:04:44.466 08:50:22 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:44.466 08:50:22 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:44.466 08:50:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.466 08:50:22 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:44.466 08:50:22 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:44.466 08:50:22 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.466 08:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.466 08:50:22 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:44.466 08:50:22 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.466 08:50:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.466 08:50:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.466 08:50:22 -- common/autotest_common.sh@10 -- # set +x 00:04:44.466 ************************************ 00:04:44.466 START TEST env 
00:04:44.466 ************************************ 00:04:44.466 08:50:22 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:44.466 * Looking for test storage... 00:04:44.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:44.466 08:50:22 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.466 08:50:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.466 08:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.466 08:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.466 ************************************ 00:04:44.466 START TEST env_memory 00:04:44.466 ************************************ 00:04:44.466 08:50:22 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:44.466 00:04:44.466 00:04:44.466 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.466 http://cunit.sourceforge.net/ 00:04:44.466 00:04:44.466 00:04:44.466 Suite: memory 00:04:44.466 Test: alloc and free memory map ...[2024-07-24 08:50:22.449969] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:44.466 passed 00:04:44.466 Test: mem map translation ...[2024-07-24 08:50:22.471672] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:44.466 [2024-07-24 08:50:22.471694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:44.466 [2024-07-24 08:50:22.471753] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:44.466 [2024-07-24 08:50:22.471765] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:44.466 passed 00:04:44.466 Test: mem map registration ...[2024-07-24 08:50:22.515383] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:44.466 [2024-07-24 08:50:22.515418] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:44.467 passed 00:04:44.467 Test: mem map adjacent registrations ...passed 00:04:44.467 00:04:44.467 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.467 suites 1 1 n/a 0 0 00:04:44.467 tests 4 4 4 0 0 00:04:44.467 asserts 152 152 152 0 n/a 00:04:44.467 00:04:44.467 Elapsed time = 0.147 seconds 00:04:44.467 00:04:44.467 real 0m0.154s 00:04:44.467 user 0m0.145s 00:04:44.467 sys 0m0.008s 00:04:44.467 08:50:22 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.467 08:50:22 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:44.467 ************************************ 00:04:44.467 END TEST env_memory 00:04:44.467 ************************************ 00:04:44.728 08:50:22 env -- env/env.sh@11 -- # run_test env_vtophys 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.728 08:50:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.728 08:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.728 08:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.728 ************************************ 00:04:44.728 START TEST env_vtophys 00:04:44.728 ************************************ 00:04:44.728 08:50:22 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:44.728 EAL: lib.eal log level changed from notice to debug 00:04:44.728 EAL: Detected lcore 0 as core 0 on socket 0 00:04:44.728 EAL: Detected lcore 1 as core 1 on socket 0 00:04:44.728 EAL: Detected lcore 2 as core 2 on socket 0 00:04:44.728 EAL: Detected lcore 3 as core 3 on socket 0 00:04:44.728 EAL: Detected lcore 4 as core 4 on socket 0 00:04:44.728 EAL: Detected lcore 5 as core 5 on socket 0 00:04:44.728 EAL: Detected lcore 6 as core 8 on socket 0 00:04:44.728 EAL: Detected lcore 7 as core 9 on socket 0 00:04:44.728 EAL: Detected lcore 8 as core 10 on socket 0 00:04:44.728 EAL: Detected lcore 9 as core 11 on socket 0 00:04:44.728 EAL: Detected lcore 10 as core 12 on socket 0 00:04:44.728 EAL: Detected lcore 11 as core 13 on socket 0 00:04:44.728 EAL: Detected lcore 12 as core 0 on socket 1 00:04:44.728 EAL: Detected lcore 13 as core 1 on socket 1 00:04:44.728 EAL: Detected lcore 14 as core 2 on socket 1 00:04:44.728 EAL: Detected lcore 15 as core 3 on socket 1 00:04:44.728 EAL: Detected lcore 16 as core 4 on socket 1 00:04:44.728 EAL: Detected lcore 17 as core 5 on socket 1 00:04:44.728 EAL: Detected lcore 18 as core 8 on socket 1 00:04:44.728 EAL: Detected lcore 19 as core 9 on socket 1 00:04:44.728 EAL: Detected lcore 20 as core 10 on socket 1 00:04:44.728 EAL: Detected lcore 21 as core 11 on socket 1 00:04:44.728 EAL: Detected lcore 22 as core 12 on socket 1 00:04:44.728 EAL: Detected lcore 23 as core 13 on socket 1 00:04:44.728 EAL: Detected lcore 24 as core 0 on socket 0 00:04:44.728 EAL: Detected lcore 25 as core 1 on socket 0 00:04:44.728 EAL: Detected lcore 26 as core 2 on socket 0 00:04:44.728 EAL: Detected lcore 27 as core 3 on socket 0 00:04:44.728 EAL: Detected lcore 28 as core 4 on socket 0 00:04:44.728 EAL: Detected lcore 29 as core 5 on socket 0 00:04:44.728 EAL: Detected lcore 30 as core 8 on socket 0 00:04:44.728 EAL: Detected lcore 31 as core 9 on socket 0 00:04:44.728 EAL: Detected lcore 32 as core 10 on socket 0 00:04:44.728 EAL: Detected lcore 33 as core 11 on socket 0 00:04:44.728 EAL: Detected lcore 34 as core 12 on socket 0 00:04:44.728 EAL: Detected lcore 35 as core 13 on socket 0 00:04:44.728 EAL: Detected lcore 36 as core 0 on socket 1 00:04:44.728 EAL: Detected lcore 37 as core 1 on socket 1 00:04:44.728 EAL: Detected lcore 38 as core 2 on socket 1 00:04:44.728 EAL: Detected lcore 39 as core 3 on socket 1 00:04:44.728 EAL: Detected lcore 40 as core 4 on socket 1 00:04:44.728 EAL: Detected lcore 41 as core 5 on socket 1 00:04:44.728 EAL: Detected lcore 42 as core 8 on socket 1 00:04:44.728 EAL: Detected lcore 43 as core 9 on socket 1 00:04:44.728 EAL: Detected lcore 44 as core 10 on socket 1 00:04:44.728 EAL: Detected lcore 45 as core 11 on socket 1 00:04:44.728 EAL: Detected lcore 46 as core 12 on socket 1 00:04:44.728 EAL: Detected lcore 47 as core 13 on socket 1 00:04:44.728 EAL: Maximum logical cores by configuration: 128 00:04:44.728 EAL: Detected CPU lcores: 48 
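EAL's lcore table above (48 lcores, cores 0-13 across two sockets, with lcores 24-47 as hyperthread siblings) comes from the same topology Linux exposes under sysfs. A rough equivalent of that enumeration, assuming the usual per-cpu topology files are present:

# Print "lcore N as core C on socket S" from sysfs, mirroring EAL's table
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
    n=${cpu##*cpu}
    core=$(cat "$cpu/topology/core_id")
    sock=$(cat "$cpu/topology/physical_package_id")
    echo "Detected lcore $n as core $core on socket $sock"
done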
00:04:44.728 EAL: Detected NUMA nodes: 2 00:04:44.728 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:04:44.728 EAL: Detected shared linkage of DPDK 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:04:44.728 EAL: Registered [vdev] bus. 00:04:44.728 EAL: bus.vdev log level changed from disabled to notice 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:04:44.728 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:44.728 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:04:44.728 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:04:44.728 EAL: No shared files mode enabled, IPC will be disabled 00:04:44.728 EAL: No shared files mode enabled, IPC is disabled 00:04:44.728 EAL: Bus pci wants IOVA as 'DC' 00:04:44.728 EAL: Bus vdev wants IOVA as 'DC' 00:04:44.728 EAL: Buses did not request a specific IOVA mode. 00:04:44.728 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:44.728 EAL: Selected IOVA mode 'VA' 00:04:44.728 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.728 EAL: Probing VFIO support... 00:04:44.728 EAL: IOMMU type 1 (Type 1) is supported 00:04:44.728 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:44.728 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:44.728 EAL: VFIO support initialized 00:04:44.728 EAL: Ask a virtual area of 0x2e000 bytes 00:04:44.728 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:44.728 EAL: Setting up physically contiguous memory... 
00:04:44.729 EAL: Setting maximum number of open files to 524288 00:04:44.729 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:44.729 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:44.729 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:44.729 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:44.729 EAL: Ask a virtual area of 0x61000 bytes 00:04:44.729 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:44.729 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:44.729 EAL: Ask a virtual area of 0x400000000 bytes 00:04:44.729 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:44.729 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:44.729 EAL: Hugepages will be freed exactly as allocated. 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: TSC frequency is ~2700000 KHz 00:04:44.729 EAL: Main lcore 0 is ready (tid=7f82d913ca00;cpuset=[0]) 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 0 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 2MB 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Mem event callback 'spdk:(nil)' registered 00:04:44.729 00:04:44.729 00:04:44.729 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.729 http://cunit.sourceforge.net/ 00:04:44.729 00:04:44.729 00:04:44.729 Suite: components_suite 00:04:44.729 Test: vtophys_malloc_test ...passed 00:04:44.729 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 4MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 4MB 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 6MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 6MB 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 10MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 10MB 00:04:44.729 EAL: Trying to obtain current memory policy. 
00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 18MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 18MB 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 34MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 34MB 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 66MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was shrunk by 66MB 00:04:44.729 EAL: Trying to obtain current memory policy. 00:04:44.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.729 EAL: Restoring previous memory policy: 4 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.729 EAL: request: mp_malloc_sync 00:04:44.729 EAL: No shared files mode enabled, IPC is disabled 00:04:44.729 EAL: Heap on socket 0 was expanded by 130MB 00:04:44.729 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.989 EAL: request: mp_malloc_sync 00:04:44.989 EAL: No shared files mode enabled, IPC is disabled 00:04:44.989 EAL: Heap on socket 0 was shrunk by 130MB 00:04:44.989 EAL: Trying to obtain current memory policy. 00:04:44.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:44.989 EAL: Restoring previous memory policy: 4 00:04:44.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.989 EAL: request: mp_malloc_sync 00:04:44.989 EAL: No shared files mode enabled, IPC is disabled 00:04:44.989 EAL: Heap on socket 0 was expanded by 258MB 00:04:44.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.989 EAL: request: mp_malloc_sync 00:04:44.989 EAL: No shared files mode enabled, IPC is disabled 00:04:44.989 EAL: Heap on socket 0 was shrunk by 258MB 00:04:44.989 EAL: Trying to obtain current memory policy. 
00:04:44.989 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.249 EAL: Restoring previous memory policy: 4 00:04:45.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.249 EAL: request: mp_malloc_sync 00:04:45.249 EAL: No shared files mode enabled, IPC is disabled 00:04:45.249 EAL: Heap on socket 0 was expanded by 514MB 00:04:45.249 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.508 EAL: request: mp_malloc_sync 00:04:45.508 EAL: No shared files mode enabled, IPC is disabled 00:04:45.508 EAL: Heap on socket 0 was shrunk by 514MB 00:04:45.508 EAL: Trying to obtain current memory policy. 00:04:45.508 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:45.767 EAL: Restoring previous memory policy: 4 00:04:45.767 EAL: Calling mem event callback 'spdk:(nil)' 00:04:45.767 EAL: request: mp_malloc_sync 00:04:45.767 EAL: No shared files mode enabled, IPC is disabled 00:04:45.767 EAL: Heap on socket 0 was expanded by 1026MB 00:04:46.025 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.025 EAL: request: mp_malloc_sync 00:04:46.025 EAL: No shared files mode enabled, IPC is disabled 00:04:46.025 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:46.025 passed 00:04:46.025 00:04:46.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.025 suites 1 1 n/a 0 0 00:04:46.025 tests 2 2 2 0 0 00:04:46.025 asserts 497 497 497 0 n/a 00:04:46.025 00:04:46.025 Elapsed time = 1.386 seconds 00:04:46.025 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.025 EAL: request: mp_malloc_sync 00:04:46.025 EAL: No shared files mode enabled, IPC is disabled 00:04:46.025 EAL: Heap on socket 0 was shrunk by 2MB 00:04:46.025 EAL: No shared files mode enabled, IPC is disabled 00:04:46.025 EAL: No shared files mode enabled, IPC is disabled 00:04:46.025 EAL: No shared files mode enabled, IPC is disabled 00:04:46.025 00:04:46.025 real 0m1.506s 00:04:46.025 user 0m0.858s 00:04:46.025 sys 0m0.613s 00:04:46.025 08:50:24 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.025 08:50:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:46.025 ************************************ 00:04:46.025 END TEST env_vtophys 00:04:46.025 ************************************ 00:04:46.285 08:50:24 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:46.285 08:50:24 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.285 08:50:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.285 08:50:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.285 ************************************ 00:04:46.285 START TEST env_pci 00:04:46.285 ************************************ 00:04:46.285 08:50:24 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:46.285 00:04:46.285 00:04:46.285 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.285 http://cunit.sourceforge.net/ 00:04:46.285 00:04:46.285 00:04:46.285 Suite: pci 00:04:46.285 Test: pci_hook ...[2024-07-24 08:50:24.189343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3632699 has claimed it 00:04:46.285 EAL: Cannot find device (10000:00:01.0) 00:04:46.285 EAL: Failed to attach device on primary process 00:04:46.285 passed 00:04:46.285 00:04:46.285 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:46.285 suites 1 1 n/a 0 0 00:04:46.285 tests 1 1 1 0 0 00:04:46.285 asserts 25 25 25 0 n/a 00:04:46.285 00:04:46.285 Elapsed time = 0.021 seconds 00:04:46.285 00:04:46.285 real 0m0.033s 00:04:46.285 user 0m0.010s 00:04:46.285 sys 0m0.022s 00:04:46.285 08:50:24 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.285 08:50:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:46.285 ************************************ 00:04:46.285 END TEST env_pci 00:04:46.285 ************************************ 00:04:46.285 08:50:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:46.285 08:50:24 env -- env/env.sh@15 -- # uname 00:04:46.285 08:50:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:46.285 08:50:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:46.285 08:50:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.285 08:50:24 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:46.285 08:50:24 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.285 08:50:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.285 ************************************ 00:04:46.285 START TEST env_dpdk_post_init 00:04:46.285 ************************************ 00:04:46.285 08:50:24 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:46.285 EAL: Detected CPU lcores: 48 00:04:46.285 EAL: Detected NUMA nodes: 2 00:04:46.285 EAL: Detected shared linkage of DPDK 00:04:46.285 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:46.285 EAL: Selected IOVA mode 'VA' 00:04:46.285 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.285 EAL: VFIO support initialized 00:04:46.285 EAL: Using IOMMU type 1 (Type 1) 00:04:51.563 Starting DPDK initialization... 00:04:51.563 Starting SPDK post initialization... 00:04:51.563 SPDK NVMe probe 00:04:51.563 Attaching to 0000:0b:00.0 00:04:51.563 Attached to 0000:0b:00.0 00:04:51.563 Cleaning up... 
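env_dpdk_post_init only attaches once setup.sh has rebound 0000:0b:00.0 from the kernel nvme driver to vfio-pci, the same rebinds logged throughout this run. A bare-bones sketch of that unbind/override/probe flow, assuming vfio-pci is already loaded (setup.sh itself covers many more cases, including IOMMU checks and uio fallback):

bdf=0000:0b:00.0
# Detach from whatever driver currently owns the device, if any
if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
    echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
fi
# Route this device to vfio-pci and trigger a probe
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
# Reverting is the same sequence with 'nvme' (the original driver) instead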
00:04:51.563 00:04:51.563 real 0m4.368s 00:04:51.563 user 0m3.239s 00:04:51.563 sys 0m0.186s 00:04:51.563 08:50:28 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.563 08:50:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 END TEST env_dpdk_post_init 00:04:51.563 ************************************ 00:04:51.563 08:50:28 env -- env/env.sh@26 -- # uname 00:04:51.563 08:50:28 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.563 08:50:28 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.563 08:50:28 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.563 08:50:28 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.563 08:50:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 START TEST env_mem_callbacks 00:04:51.563 ************************************ 00:04:51.563 08:50:28 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.563 EAL: Detected CPU lcores: 48 00:04:51.563 EAL: Detected NUMA nodes: 2 00:04:51.563 EAL: Detected shared linkage of DPDK 00:04:51.563 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.563 EAL: Selected IOVA mode 'VA' 00:04:51.563 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.563 EAL: VFIO support initialized 00:04:51.563 00:04:51.563 00:04:51.563 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.563 http://cunit.sourceforge.net/ 00:04:51.563 00:04:51.563 00:04:51.563 Suite: memory 00:04:51.563 Test: test ... 
00:04:51.563 register 0x200000200000 2097152 00:04:51.563 malloc 3145728 00:04:51.563 register 0x200000400000 4194304 00:04:51.563 buf 0x200000500000 len 3145728 PASSED 00:04:51.563 malloc 64 00:04:51.563 buf 0x2000004fff40 len 64 PASSED 00:04:51.563 malloc 4194304 00:04:51.563 register 0x200000800000 6291456 00:04:51.563 buf 0x200000a00000 len 4194304 PASSED 00:04:51.563 free 0x200000500000 3145728 00:04:51.563 free 0x2000004fff40 64 00:04:51.563 unregister 0x200000400000 4194304 PASSED 00:04:51.563 free 0x200000a00000 4194304 00:04:51.563 unregister 0x200000800000 6291456 PASSED 00:04:51.563 malloc 8388608 00:04:51.563 register 0x200000400000 10485760 00:04:51.563 buf 0x200000600000 len 8388608 PASSED 00:04:51.563 free 0x200000600000 8388608 00:04:51.563 unregister 0x200000400000 10485760 PASSED 00:04:51.563 passed 00:04:51.563 00:04:51.563 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.563 suites 1 1 n/a 0 0 00:04:51.563 tests 1 1 1 0 0 00:04:51.563 asserts 15 15 15 0 n/a 00:04:51.563 00:04:51.563 Elapsed time = 0.005 seconds 00:04:51.563 00:04:51.563 real 0m0.047s 00:04:51.563 user 0m0.011s 00:04:51.563 sys 0m0.036s 00:04:51.563 08:50:28 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.563 08:50:28 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 END TEST env_mem_callbacks 00:04:51.563 ************************************ 00:04:51.563 00:04:51.563 real 0m6.392s 00:04:51.563 user 0m4.368s 00:04:51.563 sys 0m1.062s 00:04:51.563 08:50:28 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.563 08:50:28 env -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 END TEST env 00:04:51.563 ************************************ 00:04:51.563 08:50:28 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.563 08:50:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.563 08:50:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.563 08:50:28 -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 START TEST rpc 00:04:51.563 ************************************ 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:51.563 * Looking for test storage... 00:04:51.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.563 08:50:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3633348 00:04:51.563 08:50:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:51.563 08:50:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.563 08:50:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3633348 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@829 -- # '[' -z 3633348 ']' 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
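waitforlisten, traced above, blocks until spdk_tgt creates its UNIX-domain RPC socket. A simplified version of that wait loop, assuming the default /var/tmp/spdk.sock path and SPDK's stock rpc.py (the real helper also retries longer and distinguishes startup failures):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
pid=$1                       # spdk_tgt pid, e.g. 3633348 in this run
sock=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || { echo "process died"; exit 1; }
    # rpc_get_methods succeeds once the RPC server accepts connections
    if [[ -S $sock ]] && "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; then
        echo "listening"
        exit 0
    fi
    sleep 0.1
done
exit 1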
00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.563 08:50:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 [2024-07-24 08:50:28.897142] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:04:51.563 [2024-07-24 08:50:28.897234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633348 ] 00:04:51.563 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.563 [2024-07-24 08:50:28.927749] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:04:51.563 [2024-07-24 08:50:28.954697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.563 [2024-07-24 08:50:29.041272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:51.563 [2024-07-24 08:50:29.041323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3633348' to capture a snapshot of events at runtime. 00:04:51.563 [2024-07-24 08:50:29.041352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:51.563 [2024-07-24 08:50:29.041364] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:51.563 [2024-07-24 08:50:29.041374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3633348 for offline analysis/debug. 00:04:51.563 [2024-07-24 08:50:29.041425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.563 08:50:29 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:51.563 08:50:29 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:51.563 08:50:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.563 08:50:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:51.563 08:50:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:51.563 08:50:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:51.563 08:50:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.563 08:50:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.563 08:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 ************************************ 00:04:51.563 START TEST rpc_integrity 00:04:51.563 ************************************ 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 08:50:29 rpc.rpc_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.563 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.563 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:51.563 { 00:04:51.563 "name": "Malloc0", 00:04:51.563 "aliases": [ 00:04:51.563 "74cd0286-33e2-44c9-a091-131a3cb0787f" 00:04:51.563 ], 00:04:51.563 "product_name": "Malloc disk", 00:04:51.563 "block_size": 512, 00:04:51.563 "num_blocks": 16384, 00:04:51.563 "uuid": "74cd0286-33e2-44c9-a091-131a3cb0787f", 00:04:51.563 "assigned_rate_limits": { 00:04:51.563 "rw_ios_per_sec": 0, 00:04:51.563 "rw_mbytes_per_sec": 0, 00:04:51.563 "r_mbytes_per_sec": 0, 00:04:51.564 "w_mbytes_per_sec": 0 00:04:51.564 }, 00:04:51.564 "claimed": false, 00:04:51.564 "zoned": false, 00:04:51.564 "supported_io_types": { 00:04:51.564 "read": true, 00:04:51.564 "write": true, 00:04:51.564 "unmap": true, 00:04:51.564 "flush": true, 00:04:51.564 "reset": true, 00:04:51.564 "nvme_admin": false, 00:04:51.564 "nvme_io": false, 00:04:51.564 "nvme_io_md": false, 00:04:51.564 "write_zeroes": true, 00:04:51.564 "zcopy": true, 00:04:51.564 "get_zone_info": false, 00:04:51.564 "zone_management": false, 00:04:51.564 "zone_append": false, 00:04:51.564 "compare": false, 00:04:51.564 "compare_and_write": false, 00:04:51.564 "abort": true, 00:04:51.564 "seek_hole": false, 00:04:51.564 "seek_data": false, 00:04:51.564 "copy": true, 00:04:51.564 "nvme_iov_md": false 00:04:51.564 }, 00:04:51.564 "memory_domains": [ 00:04:51.564 { 00:04:51.564 "dma_device_id": "system", 00:04:51.564 "dma_device_type": 1 00:04:51.564 }, 00:04:51.564 { 00:04:51.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.564 "dma_device_type": 2 00:04:51.564 } 00:04:51.564 ], 00:04:51.564 "driver_specific": {} 00:04:51.564 } 00:04:51.564 ]' 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 [2024-07-24 08:50:29.433926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:51.564 [2024-07-24 08:50:29.433974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:51.564 [2024-07-24 08:50:29.433997] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a197f0 00:04:51.564 
[2024-07-24 08:50:29.434013] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:51.564 [2024-07-24 08:50:29.435544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:51.564 [2024-07-24 08:50:29.435572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:51.564 Passthru0 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:51.564 { 00:04:51.564 "name": "Malloc0", 00:04:51.564 "aliases": [ 00:04:51.564 "74cd0286-33e2-44c9-a091-131a3cb0787f" 00:04:51.564 ], 00:04:51.564 "product_name": "Malloc disk", 00:04:51.564 "block_size": 512, 00:04:51.564 "num_blocks": 16384, 00:04:51.564 "uuid": "74cd0286-33e2-44c9-a091-131a3cb0787f", 00:04:51.564 "assigned_rate_limits": { 00:04:51.564 "rw_ios_per_sec": 0, 00:04:51.564 "rw_mbytes_per_sec": 0, 00:04:51.564 "r_mbytes_per_sec": 0, 00:04:51.564 "w_mbytes_per_sec": 0 00:04:51.564 }, 00:04:51.564 "claimed": true, 00:04:51.564 "claim_type": "exclusive_write", 00:04:51.564 "zoned": false, 00:04:51.564 "supported_io_types": { 00:04:51.564 "read": true, 00:04:51.564 "write": true, 00:04:51.564 "unmap": true, 00:04:51.564 "flush": true, 00:04:51.564 "reset": true, 00:04:51.564 "nvme_admin": false, 00:04:51.564 "nvme_io": false, 00:04:51.564 "nvme_io_md": false, 00:04:51.564 "write_zeroes": true, 00:04:51.564 "zcopy": true, 00:04:51.564 "get_zone_info": false, 00:04:51.564 "zone_management": false, 00:04:51.564 "zone_append": false, 00:04:51.564 "compare": false, 00:04:51.564 "compare_and_write": false, 00:04:51.564 "abort": true, 00:04:51.564 "seek_hole": false, 00:04:51.564 "seek_data": false, 00:04:51.564 "copy": true, 00:04:51.564 "nvme_iov_md": false 00:04:51.564 }, 00:04:51.564 "memory_domains": [ 00:04:51.564 { 00:04:51.564 "dma_device_id": "system", 00:04:51.564 "dma_device_type": 1 00:04:51.564 }, 00:04:51.564 { 00:04:51.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.564 "dma_device_type": 2 00:04:51.564 } 00:04:51.564 ], 00:04:51.564 "driver_specific": {} 00:04:51.564 }, 00:04:51.564 { 00:04:51.564 "name": "Passthru0", 00:04:51.564 "aliases": [ 00:04:51.564 "4fad2247-46d2-56d7-8efe-7ed8c5b4284f" 00:04:51.564 ], 00:04:51.564 "product_name": "passthru", 00:04:51.564 "block_size": 512, 00:04:51.564 "num_blocks": 16384, 00:04:51.564 "uuid": "4fad2247-46d2-56d7-8efe-7ed8c5b4284f", 00:04:51.564 "assigned_rate_limits": { 00:04:51.564 "rw_ios_per_sec": 0, 00:04:51.564 "rw_mbytes_per_sec": 0, 00:04:51.564 "r_mbytes_per_sec": 0, 00:04:51.564 "w_mbytes_per_sec": 0 00:04:51.564 }, 00:04:51.564 "claimed": false, 00:04:51.564 "zoned": false, 00:04:51.564 "supported_io_types": { 00:04:51.564 "read": true, 00:04:51.564 "write": true, 00:04:51.564 "unmap": true, 00:04:51.564 "flush": true, 00:04:51.564 "reset": true, 00:04:51.564 "nvme_admin": false, 00:04:51.564 "nvme_io": false, 00:04:51.564 "nvme_io_md": false, 00:04:51.564 "write_zeroes": true, 00:04:51.564 "zcopy": true, 00:04:51.564 "get_zone_info": false, 00:04:51.564 "zone_management": false, 00:04:51.564 "zone_append": false, 00:04:51.564 
"compare": false, 00:04:51.564 "compare_and_write": false, 00:04:51.564 "abort": true, 00:04:51.564 "seek_hole": false, 00:04:51.564 "seek_data": false, 00:04:51.564 "copy": true, 00:04:51.564 "nvme_iov_md": false 00:04:51.564 }, 00:04:51.564 "memory_domains": [ 00:04:51.564 { 00:04:51.564 "dma_device_id": "system", 00:04:51.564 "dma_device_type": 1 00:04:51.564 }, 00:04:51.564 { 00:04:51.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.564 "dma_device_type": 2 00:04:51.564 } 00:04:51.564 ], 00:04:51.564 "driver_specific": { 00:04:51.564 "passthru": { 00:04:51.564 "name": "Passthru0", 00:04:51.564 "base_bdev_name": "Malloc0" 00:04:51.564 } 00:04:51.564 } 00:04:51.564 } 00:04:51.564 ]' 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:51.564 08:50:29 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:51.564 00:04:51.564 real 0m0.235s 00:04:51.564 user 0m0.156s 00:04:51.564 sys 0m0.020s 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 ************************************ 00:04:51.564 END TEST rpc_integrity 00:04:51.564 ************************************ 00:04:51.564 08:50:29 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:51.564 08:50:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.564 08:50:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.564 08:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 ************************************ 00:04:51.564 START TEST rpc_plugins 00:04:51.564 ************************************ 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:51.564 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:51.564 
08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.564 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.564 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:51.564 { 00:04:51.564 "name": "Malloc1", 00:04:51.564 "aliases": [ 00:04:51.564 "eac2132c-5ff4-4da5-b59d-7e16ddbfc28a" 00:04:51.564 ], 00:04:51.565 "product_name": "Malloc disk", 00:04:51.565 "block_size": 4096, 00:04:51.565 "num_blocks": 256, 00:04:51.565 "uuid": "eac2132c-5ff4-4da5-b59d-7e16ddbfc28a", 00:04:51.565 "assigned_rate_limits": { 00:04:51.565 "rw_ios_per_sec": 0, 00:04:51.565 "rw_mbytes_per_sec": 0, 00:04:51.565 "r_mbytes_per_sec": 0, 00:04:51.565 "w_mbytes_per_sec": 0 00:04:51.565 }, 00:04:51.565 "claimed": false, 00:04:51.565 "zoned": false, 00:04:51.565 "supported_io_types": { 00:04:51.565 "read": true, 00:04:51.565 "write": true, 00:04:51.565 "unmap": true, 00:04:51.565 "flush": true, 00:04:51.565 "reset": true, 00:04:51.565 "nvme_admin": false, 00:04:51.565 "nvme_io": false, 00:04:51.565 "nvme_io_md": false, 00:04:51.565 "write_zeroes": true, 00:04:51.565 "zcopy": true, 00:04:51.565 "get_zone_info": false, 00:04:51.565 "zone_management": false, 00:04:51.565 "zone_append": false, 00:04:51.565 "compare": false, 00:04:51.565 "compare_and_write": false, 00:04:51.565 "abort": true, 00:04:51.565 "seek_hole": false, 00:04:51.565 "seek_data": false, 00:04:51.565 "copy": true, 00:04:51.565 "nvme_iov_md": false 00:04:51.565 }, 00:04:51.565 "memory_domains": [ 00:04:51.565 { 00:04:51.565 "dma_device_id": "system", 00:04:51.565 "dma_device_type": 1 00:04:51.565 }, 00:04:51.565 { 00:04:51.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:51.565 "dma_device_type": 2 00:04:51.565 } 00:04:51.565 ], 00:04:51.565 "driver_specific": {} 00:04:51.565 } 00:04:51.565 ]' 00:04:51.565 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:51.565 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:51.565 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:51.565 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.565 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.565 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.565 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:51.565 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.565 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.825 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:51.825 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:51.825 08:50:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:51.825 00:04:51.825 real 0m0.116s 00:04:51.825 user 0m0.077s 00:04:51.825 sys 0m0.010s 00:04:51.825 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.825 08:50:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 ************************************ 00:04:51.825 END TEST rpc_plugins 00:04:51.825 ************************************ 00:04:51.825 08:50:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:04:51.825 08:50:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:51.825 08:50:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.825 08:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 ************************************ 00:04:51.825 START TEST rpc_trace_cmd_test 00:04:51.825 ************************************ 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:51.825 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3633348", 00:04:51.825 "tpoint_group_mask": "0x8", 00:04:51.825 "iscsi_conn": { 00:04:51.825 "mask": "0x2", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "scsi": { 00:04:51.825 "mask": "0x4", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "bdev": { 00:04:51.825 "mask": "0x8", 00:04:51.825 "tpoint_mask": "0xffffffffffffffff" 00:04:51.825 }, 00:04:51.825 "nvmf_rdma": { 00:04:51.825 "mask": "0x10", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "nvmf_tcp": { 00:04:51.825 "mask": "0x20", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "ftl": { 00:04:51.825 "mask": "0x40", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "blobfs": { 00:04:51.825 "mask": "0x80", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "dsa": { 00:04:51.825 "mask": "0x200", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "thread": { 00:04:51.825 "mask": "0x400", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "nvme_pcie": { 00:04:51.825 "mask": "0x800", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "iaa": { 00:04:51.825 "mask": "0x1000", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "nvme_tcp": { 00:04:51.825 "mask": "0x2000", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "bdev_nvme": { 00:04:51.825 "mask": "0x4000", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 }, 00:04:51.825 "sock": { 00:04:51.825 "mask": "0x8000", 00:04:51.825 "tpoint_mask": "0x0" 00:04:51.825 } 00:04:51.825 }' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:51.825 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:52.084 08:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:52.084 00:04:52.084 real 
0m0.198s 00:04:52.084 user 0m0.174s 00:04:52.084 sys 0m0.016s 00:04:52.084 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.084 08:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:52.084 ************************************ 00:04:52.084 END TEST rpc_trace_cmd_test 00:04:52.084 ************************************ 00:04:52.084 08:50:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:52.084 08:50:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:52.084 08:50:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:52.084 08:50:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.084 08:50:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.084 08:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.084 ************************************ 00:04:52.084 START TEST rpc_daemon_integrity 00:04:52.084 ************************************ 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.084 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.085 { 00:04:52.085 "name": "Malloc2", 00:04:52.085 "aliases": [ 00:04:52.085 "8ab5f356-39a7-4e60-a2a0-9ffaf5a1c821" 00:04:52.085 ], 00:04:52.085 "product_name": "Malloc disk", 00:04:52.085 "block_size": 512, 00:04:52.085 "num_blocks": 16384, 00:04:52.085 "uuid": "8ab5f356-39a7-4e60-a2a0-9ffaf5a1c821", 00:04:52.085 "assigned_rate_limits": { 00:04:52.085 "rw_ios_per_sec": 0, 00:04:52.085 "rw_mbytes_per_sec": 0, 00:04:52.085 "r_mbytes_per_sec": 0, 00:04:52.085 "w_mbytes_per_sec": 0 00:04:52.085 }, 00:04:52.085 "claimed": false, 00:04:52.085 "zoned": false, 00:04:52.085 "supported_io_types": { 00:04:52.085 "read": true, 00:04:52.085 "write": true, 00:04:52.085 "unmap": true, 00:04:52.085 "flush": true, 00:04:52.085 "reset": true, 00:04:52.085 "nvme_admin": false, 00:04:52.085 "nvme_io": false, 00:04:52.085 "nvme_io_md": false, 00:04:52.085 "write_zeroes": true, 00:04:52.085 "zcopy": true, 
00:04:52.085 "get_zone_info": false, 00:04:52.085 "zone_management": false, 00:04:52.085 "zone_append": false, 00:04:52.085 "compare": false, 00:04:52.085 "compare_and_write": false, 00:04:52.085 "abort": true, 00:04:52.085 "seek_hole": false, 00:04:52.085 "seek_data": false, 00:04:52.085 "copy": true, 00:04:52.085 "nvme_iov_md": false 00:04:52.085 }, 00:04:52.085 "memory_domains": [ 00:04:52.085 { 00:04:52.085 "dma_device_id": "system", 00:04:52.085 "dma_device_type": 1 00:04:52.085 }, 00:04:52.085 { 00:04:52.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.085 "dma_device_type": 2 00:04:52.085 } 00:04:52.085 ], 00:04:52.085 "driver_specific": {} 00:04:52.085 } 00:04:52.085 ]' 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 [2024-07-24 08:50:30.116724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:52.085 [2024-07-24 08:50:30.116773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.085 [2024-07-24 08:50:30.116797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bbd490 00:04:52.085 [2024-07-24 08:50:30.116813] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.085 [2024-07-24 08:50:30.118162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.085 [2024-07-24 08:50:30.118187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.085 Passthru0 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.085 { 00:04:52.085 "name": "Malloc2", 00:04:52.085 "aliases": [ 00:04:52.085 "8ab5f356-39a7-4e60-a2a0-9ffaf5a1c821" 00:04:52.085 ], 00:04:52.085 "product_name": "Malloc disk", 00:04:52.085 "block_size": 512, 00:04:52.085 "num_blocks": 16384, 00:04:52.085 "uuid": "8ab5f356-39a7-4e60-a2a0-9ffaf5a1c821", 00:04:52.085 "assigned_rate_limits": { 00:04:52.085 "rw_ios_per_sec": 0, 00:04:52.085 "rw_mbytes_per_sec": 0, 00:04:52.085 "r_mbytes_per_sec": 0, 00:04:52.085 "w_mbytes_per_sec": 0 00:04:52.085 }, 00:04:52.085 "claimed": true, 00:04:52.085 "claim_type": "exclusive_write", 00:04:52.085 "zoned": false, 00:04:52.085 "supported_io_types": { 00:04:52.085 "read": true, 00:04:52.085 "write": true, 00:04:52.085 "unmap": true, 00:04:52.085 "flush": true, 00:04:52.085 "reset": true, 00:04:52.085 "nvme_admin": false, 00:04:52.085 "nvme_io": false, 00:04:52.085 "nvme_io_md": false, 00:04:52.085 "write_zeroes": true, 00:04:52.085 "zcopy": true, 00:04:52.085 "get_zone_info": false, 00:04:52.085 "zone_management": false, 00:04:52.085 "zone_append": false, 00:04:52.085 
"compare": false, 00:04:52.085 "compare_and_write": false, 00:04:52.085 "abort": true, 00:04:52.085 "seek_hole": false, 00:04:52.085 "seek_data": false, 00:04:52.085 "copy": true, 00:04:52.085 "nvme_iov_md": false 00:04:52.085 }, 00:04:52.085 "memory_domains": [ 00:04:52.085 { 00:04:52.085 "dma_device_id": "system", 00:04:52.085 "dma_device_type": 1 00:04:52.085 }, 00:04:52.085 { 00:04:52.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.085 "dma_device_type": 2 00:04:52.085 } 00:04:52.085 ], 00:04:52.085 "driver_specific": {} 00:04:52.085 }, 00:04:52.085 { 00:04:52.085 "name": "Passthru0", 00:04:52.085 "aliases": [ 00:04:52.085 "7e527769-1c2d-574d-8f3c-e0dd4208465a" 00:04:52.085 ], 00:04:52.085 "product_name": "passthru", 00:04:52.085 "block_size": 512, 00:04:52.085 "num_blocks": 16384, 00:04:52.085 "uuid": "7e527769-1c2d-574d-8f3c-e0dd4208465a", 00:04:52.085 "assigned_rate_limits": { 00:04:52.085 "rw_ios_per_sec": 0, 00:04:52.085 "rw_mbytes_per_sec": 0, 00:04:52.085 "r_mbytes_per_sec": 0, 00:04:52.085 "w_mbytes_per_sec": 0 00:04:52.085 }, 00:04:52.085 "claimed": false, 00:04:52.085 "zoned": false, 00:04:52.085 "supported_io_types": { 00:04:52.085 "read": true, 00:04:52.085 "write": true, 00:04:52.085 "unmap": true, 00:04:52.085 "flush": true, 00:04:52.085 "reset": true, 00:04:52.085 "nvme_admin": false, 00:04:52.085 "nvme_io": false, 00:04:52.085 "nvme_io_md": false, 00:04:52.085 "write_zeroes": true, 00:04:52.085 "zcopy": true, 00:04:52.085 "get_zone_info": false, 00:04:52.085 "zone_management": false, 00:04:52.085 "zone_append": false, 00:04:52.085 "compare": false, 00:04:52.085 "compare_and_write": false, 00:04:52.085 "abort": true, 00:04:52.085 "seek_hole": false, 00:04:52.085 "seek_data": false, 00:04:52.085 "copy": true, 00:04:52.085 "nvme_iov_md": false 00:04:52.085 }, 00:04:52.085 "memory_domains": [ 00:04:52.085 { 00:04:52.085 "dma_device_id": "system", 00:04:52.085 "dma_device_type": 1 00:04:52.085 }, 00:04:52.085 { 00:04:52.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.085 "dma_device_type": 2 00:04:52.085 } 00:04:52.085 ], 00:04:52.085 "driver_specific": { 00:04:52.085 "passthru": { 00:04:52.085 "name": "Passthru0", 00:04:52.085 "base_bdev_name": "Malloc2" 00:04:52.085 } 00:04:52.085 } 00:04:52.085 } 00:04:52.085 ]' 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.085 08:50:30 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.085 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:52.344 08:50:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.344 00:04:52.344 real 0m0.226s 00:04:52.344 user 0m0.153s 00:04:52.344 sys 0m0.021s 00:04:52.344 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.344 08:50:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:52.344 ************************************ 00:04:52.344 END TEST rpc_daemon_integrity 00:04:52.344 ************************************ 00:04:52.345 08:50:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:52.345 08:50:30 rpc -- rpc/rpc.sh@84 -- # killprocess 3633348 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@948 -- # '[' -z 3633348 ']' 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@952 -- # kill -0 3633348 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@953 -- # uname 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3633348 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3633348' 00:04:52.345 killing process with pid 3633348 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@967 -- # kill 3633348 00:04:52.345 08:50:30 rpc -- common/autotest_common.sh@972 -- # wait 3633348 00:04:52.609 00:04:52.609 real 0m1.894s 00:04:52.609 user 0m2.375s 00:04:52.609 sys 0m0.597s 00:04:52.609 08:50:30 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.609 08:50:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.609 ************************************ 00:04:52.609 END TEST rpc 00:04:52.609 ************************************ 00:04:52.609 08:50:30 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.609 08:50:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.609 08:50:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.609 08:50:30 -- common/autotest_common.sh@10 -- # set +x 00:04:52.903 ************************************ 00:04:52.903 START TEST skip_rpc 00:04:52.903 ************************************ 00:04:52.903 08:50:30 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:52.903 * Looking for test storage... 
00:04:52.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:52.903 08:50:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:52.903 08:50:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:52.903 08:50:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:52.903 08:50:30 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.903 08:50:30 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.903 08:50:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:52.903 ************************************ 00:04:52.903 START TEST skip_rpc 00:04:52.903 ************************************ 00:04:52.903 08:50:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:52.903 08:50:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3633787 00:04:52.903 08:50:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:52.903 08:50:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.903 08:50:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:52.903 [2024-07-24 08:50:30.863834] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:04:52.903 [2024-07-24 08:50:30.863910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3633787 ] 00:04:52.903 EAL: No free 2048 kB hugepages reported on node 1 00:04:52.903 [2024-07-24 08:50:30.894590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
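The test_skip_rpc body now starting is the simplest of the suite: launch the target with --no-rpc-server and assert that any RPC call fails, since nothing is listening on /var/tmp/spdk.sock. A condensed sketch of what the following records replay (NOT, rpc_cmd and killprocess are the autotest_common.sh helpers visible in the trace; spdk_tgt stands for the full build/bin/spdk_tgt path used above):

spdk_tgt --no-rpc-server -m 0x1 &     # no RPC listener is created at all
spdk_pid=$!
sleep 5                               # give the app time to come up
NOT rpc_cmd spdk_get_version          # NOT inverts the status: the RPC must fail
killprocess "$spdk_pid"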
00:04:52.903 [2024-07-24 08:50:30.924663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.162 [2024-07-24 08:50:31.016881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3633787 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3633787 ']' 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3633787 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3633787 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3633787' 00:04:58.439 killing process with pid 3633787 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3633787 00:04:58.439 08:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3633787 00:04:58.439 00:04:58.439 real 0m5.424s 00:04:58.439 user 0m5.106s 00:04:58.439 sys 0m0.322s 00:04:58.439 08:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.439 08:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.439 ************************************ 00:04:58.439 END TEST skip_rpc 00:04:58.439 ************************************ 00:04:58.439 08:50:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:58.439 08:50:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:58.439 08:50:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:04:58.439 08:50:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.439 ************************************ 00:04:58.439 START TEST skip_rpc_with_json 00:04:58.439 ************************************ 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3634483 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3634483 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3634483 ']' 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:58.439 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.439 [2024-07-24 08:50:36.336428] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:04:58.439 [2024-07-24 08:50:36.336541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3634483 ] 00:04:58.439 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.439 [2024-07-24 08:50:36.368321] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
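What follows is the round-trip that gives skip_rpc_with_json its name: configure a live target over RPC, serialize its state with save_config, then prove a fresh target can be rebuilt from that JSON alone. Compressed into a sketch (CONFIG_PATH and LOG_PATH are the files set at the top of the suite; redirecting the second target's output into LOG_PATH is assumed from the grep that closes the test):

rpc_cmd nvmf_create_transport -t tcp         # create some observable state
rpc_cmd save_config > "$CONFIG_PATH"         # dump live state as config.json
killprocess "$spdk_pid"
spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" &> "$LOG_PATH" &
sleep 5; killprocess $!
grep -q 'TCP Transport Init' "$LOG_PATH"     # replayed config re-created the transport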
00:04:58.439 [2024-07-24 08:50:36.394068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.439 [2024-07-24 08:50:36.483318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.699 [2024-07-24 08:50:36.741841] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:58.699 request: 00:04:58.699 { 00:04:58.699 "trtype": "tcp", 00:04:58.699 "method": "nvmf_get_transports", 00:04:58.699 "req_id": 1 00:04:58.699 } 00:04:58.699 Got JSON-RPC error response 00:04:58.699 response: 00:04:58.699 { 00:04:58.699 "code": -19, 00:04:58.699 "message": "No such device" 00:04:58.699 } 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.699 [2024-07-24 08:50:36.749973] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:58.699 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:58.958 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:58.958 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:58.958 { 00:04:58.958 "subsystems": [ 00:04:58.958 { 00:04:58.958 "subsystem": "vfio_user_target", 00:04:58.958 "config": null 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "subsystem": "keyring", 00:04:58.958 "config": [] 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "subsystem": "iobuf", 00:04:58.958 "config": [ 00:04:58.958 { 00:04:58.958 "method": "iobuf_set_options", 00:04:58.958 "params": { 00:04:58.958 "small_pool_count": 8192, 00:04:58.958 "large_pool_count": 1024, 00:04:58.958 "small_bufsize": 8192, 00:04:58.958 "large_bufsize": 135168 00:04:58.958 } 00:04:58.958 } 00:04:58.958 ] 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "subsystem": "sock", 00:04:58.958 "config": [ 00:04:58.958 { 00:04:58.958 "method": "sock_set_default_impl", 00:04:58.958 "params": { 00:04:58.958 "impl_name": "posix" 00:04:58.958 } 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "method": "sock_impl_set_options", 00:04:58.958 "params": { 00:04:58.958 "impl_name": "ssl", 00:04:58.958 "recv_buf_size": 4096, 00:04:58.958 "send_buf_size": 4096, 00:04:58.958 "enable_recv_pipe": true, 00:04:58.958 "enable_quickack": false, 00:04:58.958 "enable_placement_id": 0, 00:04:58.958 "enable_zerocopy_send_server": true, 00:04:58.958 
"enable_zerocopy_send_client": false, 00:04:58.958 "zerocopy_threshold": 0, 00:04:58.958 "tls_version": 0, 00:04:58.958 "enable_ktls": false 00:04:58.958 } 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "method": "sock_impl_set_options", 00:04:58.958 "params": { 00:04:58.958 "impl_name": "posix", 00:04:58.958 "recv_buf_size": 2097152, 00:04:58.958 "send_buf_size": 2097152, 00:04:58.958 "enable_recv_pipe": true, 00:04:58.958 "enable_quickack": false, 00:04:58.958 "enable_placement_id": 0, 00:04:58.958 "enable_zerocopy_send_server": true, 00:04:58.958 "enable_zerocopy_send_client": false, 00:04:58.958 "zerocopy_threshold": 0, 00:04:58.958 "tls_version": 0, 00:04:58.958 "enable_ktls": false 00:04:58.958 } 00:04:58.958 } 00:04:58.958 ] 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "subsystem": "vmd", 00:04:58.958 "config": [] 00:04:58.958 }, 00:04:58.958 { 00:04:58.958 "subsystem": "accel", 00:04:58.958 "config": [ 00:04:58.958 { 00:04:58.958 "method": "accel_set_options", 00:04:58.958 "params": { 00:04:58.958 "small_cache_size": 128, 00:04:58.958 "large_cache_size": 16, 00:04:58.958 "task_count": 2048, 00:04:58.958 "sequence_count": 2048, 00:04:58.958 "buf_count": 2048 00:04:58.958 } 00:04:58.958 } 00:04:58.958 ] 00:04:58.958 }, 00:04:58.958 { 00:04:58.959 "subsystem": "bdev", 00:04:58.959 "config": [ 00:04:58.959 { 00:04:58.959 "method": "bdev_set_options", 00:04:58.959 "params": { 00:04:58.959 "bdev_io_pool_size": 65535, 00:04:58.959 "bdev_io_cache_size": 256, 00:04:58.959 "bdev_auto_examine": true, 00:04:58.959 "iobuf_small_cache_size": 128, 00:04:58.959 "iobuf_large_cache_size": 16 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "bdev_raid_set_options", 00:04:58.959 "params": { 00:04:58.959 "process_window_size_kb": 1024, 00:04:58.959 "process_max_bandwidth_mb_sec": 0 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "bdev_iscsi_set_options", 00:04:58.959 "params": { 00:04:58.959 "timeout_sec": 30 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "bdev_nvme_set_options", 00:04:58.959 "params": { 00:04:58.959 "action_on_timeout": "none", 00:04:58.959 "timeout_us": 0, 00:04:58.959 "timeout_admin_us": 0, 00:04:58.959 "keep_alive_timeout_ms": 10000, 00:04:58.959 "arbitration_burst": 0, 00:04:58.959 "low_priority_weight": 0, 00:04:58.959 "medium_priority_weight": 0, 00:04:58.959 "high_priority_weight": 0, 00:04:58.959 "nvme_adminq_poll_period_us": 10000, 00:04:58.959 "nvme_ioq_poll_period_us": 0, 00:04:58.959 "io_queue_requests": 0, 00:04:58.959 "delay_cmd_submit": true, 00:04:58.959 "transport_retry_count": 4, 00:04:58.959 "bdev_retry_count": 3, 00:04:58.959 "transport_ack_timeout": 0, 00:04:58.959 "ctrlr_loss_timeout_sec": 0, 00:04:58.959 "reconnect_delay_sec": 0, 00:04:58.959 "fast_io_fail_timeout_sec": 0, 00:04:58.959 "disable_auto_failback": false, 00:04:58.959 "generate_uuids": false, 00:04:58.959 "transport_tos": 0, 00:04:58.959 "nvme_error_stat": false, 00:04:58.959 "rdma_srq_size": 0, 00:04:58.959 "io_path_stat": false, 00:04:58.959 "allow_accel_sequence": false, 00:04:58.959 "rdma_max_cq_size": 0, 00:04:58.959 "rdma_cm_event_timeout_ms": 0, 00:04:58.959 "dhchap_digests": [ 00:04:58.959 "sha256", 00:04:58.959 "sha384", 00:04:58.959 "sha512" 00:04:58.959 ], 00:04:58.959 "dhchap_dhgroups": [ 00:04:58.959 "null", 00:04:58.959 "ffdhe2048", 00:04:58.959 "ffdhe3072", 00:04:58.959 "ffdhe4096", 00:04:58.959 "ffdhe6144", 00:04:58.959 "ffdhe8192" 00:04:58.959 ] 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": 
"bdev_nvme_set_hotplug", 00:04:58.959 "params": { 00:04:58.959 "period_us": 100000, 00:04:58.959 "enable": false 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "bdev_wait_for_examine" 00:04:58.959 } 00:04:58.959 ] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "scsi", 00:04:58.959 "config": null 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "scheduler", 00:04:58.959 "config": [ 00:04:58.959 { 00:04:58.959 "method": "framework_set_scheduler", 00:04:58.959 "params": { 00:04:58.959 "name": "static" 00:04:58.959 } 00:04:58.959 } 00:04:58.959 ] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "vhost_scsi", 00:04:58.959 "config": [] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "vhost_blk", 00:04:58.959 "config": [] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "ublk", 00:04:58.959 "config": [] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "nbd", 00:04:58.959 "config": [] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "nvmf", 00:04:58.959 "config": [ 00:04:58.959 { 00:04:58.959 "method": "nvmf_set_config", 00:04:58.959 "params": { 00:04:58.959 "discovery_filter": "match_any", 00:04:58.959 "admin_cmd_passthru": { 00:04:58.959 "identify_ctrlr": false 00:04:58.959 } 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "nvmf_set_max_subsystems", 00:04:58.959 "params": { 00:04:58.959 "max_subsystems": 1024 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "nvmf_set_crdt", 00:04:58.959 "params": { 00:04:58.959 "crdt1": 0, 00:04:58.959 "crdt2": 0, 00:04:58.959 "crdt3": 0 00:04:58.959 } 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "method": "nvmf_create_transport", 00:04:58.959 "params": { 00:04:58.959 "trtype": "TCP", 00:04:58.959 "max_queue_depth": 128, 00:04:58.959 "max_io_qpairs_per_ctrlr": 127, 00:04:58.959 "in_capsule_data_size": 4096, 00:04:58.959 "max_io_size": 131072, 00:04:58.959 "io_unit_size": 131072, 00:04:58.959 "max_aq_depth": 128, 00:04:58.959 "num_shared_buffers": 511, 00:04:58.959 "buf_cache_size": 4294967295, 00:04:58.959 "dif_insert_or_strip": false, 00:04:58.959 "zcopy": false, 00:04:58.959 "c2h_success": true, 00:04:58.959 "sock_priority": 0, 00:04:58.959 "abort_timeout_sec": 1, 00:04:58.959 "ack_timeout": 0, 00:04:58.959 "data_wr_pool_size": 0 00:04:58.959 } 00:04:58.959 } 00:04:58.959 ] 00:04:58.959 }, 00:04:58.959 { 00:04:58.959 "subsystem": "iscsi", 00:04:58.959 "config": [ 00:04:58.959 { 00:04:58.959 "method": "iscsi_set_options", 00:04:58.959 "params": { 00:04:58.959 "node_base": "iqn.2016-06.io.spdk", 00:04:58.959 "max_sessions": 128, 00:04:58.959 "max_connections_per_session": 2, 00:04:58.959 "max_queue_depth": 64, 00:04:58.959 "default_time2wait": 2, 00:04:58.959 "default_time2retain": 20, 00:04:58.959 "first_burst_length": 8192, 00:04:58.959 "immediate_data": true, 00:04:58.959 "allow_duplicated_isid": false, 00:04:58.959 "error_recovery_level": 0, 00:04:58.959 "nop_timeout": 60, 00:04:58.959 "nop_in_interval": 30, 00:04:58.959 "disable_chap": false, 00:04:58.959 "require_chap": false, 00:04:58.959 "mutual_chap": false, 00:04:58.959 "chap_group": 0, 00:04:58.959 "max_large_datain_per_connection": 64, 00:04:58.959 "max_r2t_per_connection": 4, 00:04:58.959 "pdu_pool_size": 36864, 00:04:58.959 "immediate_data_pool_size": 16384, 00:04:58.959 "data_out_pool_size": 2048 00:04:58.959 } 00:04:58.959 } 00:04:58.959 ] 00:04:58.959 } 00:04:58.959 ] 00:04:58.959 } 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT 
SIGTERM EXIT 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3634483 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3634483 ']' 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3634483 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3634483 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3634483' 00:04:58.959 killing process with pid 3634483 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3634483 00:04:58.959 08:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3634483 00:04:59.218 08:50:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3634622 00:04:59.218 08:50:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:59.218 08:50:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3634622 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3634622 ']' 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3634622 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3634622 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3634622' 00:05:04.490 killing process with pid 3634622 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3634622 00:05:04.490 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3634622 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:04.749 00:05:04.749 real 0m6.487s 00:05:04.749 user 0m6.077s 00:05:04.749 sys 0m0.697s 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:04.749 
************************************ 00:05:04.749 END TEST skip_rpc_with_json 00:05:04.749 ************************************ 00:05:04.749 08:50:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:04.749 08:50:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:04.749 08:50:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.749 08:50:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.749 ************************************ 00:05:04.749 START TEST skip_rpc_with_delay 00:05:04.749 ************************************ 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:04.749 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:05.007 [2024-07-24 08:50:42.876899] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
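The ERROR record above is the expected result rather than a failure: --wait-for-rpc asks the app to pause startup until an RPC tells it to continue, which is contradictory when --no-rpc-server removes the RPC listener, so app.c rejects the combination outright. The test simply asserts that the launch exits non-zero, roughly:

NOT spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # must fail during startup validation
# the refusal happens before any work is scheduled, which is why the
# subtest's timing below comes in well under a tenth of a second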
00:05:05.007 [2024-07-24 08:50:42.877001] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:05.007 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:05.007 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.008 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:05.008 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.008 00:05:05.008 real 0m0.068s 00:05:05.008 user 0m0.049s 00:05:05.008 sys 0m0.019s 00:05:05.008 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:05.008 08:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:05.008 ************************************ 00:05:05.008 END TEST skip_rpc_with_delay 00:05:05.008 ************************************ 00:05:05.008 08:50:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:05.008 08:50:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:05.008 08:50:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:05.008 08:50:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.008 08:50:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.008 08:50:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.008 ************************************ 00:05:05.008 START TEST exit_on_failed_rpc_init 00:05:05.008 ************************************ 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3635330 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3635330 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3635330 ']' 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.008 08:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:05.008 [2024-07-24 08:50:42.994171] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
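test_exit_on_failed_rpc_init, which the trace is now entering, provokes an RPC init failure on purpose: a second target is pointed at the Unix socket the first one already owns and must exit non-zero instead of hanging. In sketch form (helpers as before; both instances default to /var/tmp/spdk.sock):

spdk_tgt -m 0x1 & spdk_pid=$!
waitforlisten "$spdk_pid"     # first instance now owns /var/tmp/spdk.sock
NOT spdk_tgt -m 0x2           # same socket: rpc.c refuses and the app stops non-zero
killprocess "$spdk_pid"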
00:05:05.008 [2024-07-24 08:50:42.994252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635330 ] 00:05:05.008 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.008 [2024-07-24 08:50:43.024853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:05.008 [2024-07-24 08:50:43.054942] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.267 [2024-07-24 08:50:43.146863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:05.527 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:05.527 [2024-07-24 08:50:43.470761] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:05.527 [2024-07-24 08:50:43.470835] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635346 ] 00:05:05.527 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.527 [2024-07-24 08:50:43.500278] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:05:05.527 [2024-07-24 08:50:43.531393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.527 [2024-07-24 08:50:43.624712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.527 [2024-07-24 08:50:43.624822] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:05.527 [2024-07-24 08:50:43.624843] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:05.527 [2024-07-24 08:50:43.624856] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3635330 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3635330 ']' 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3635330 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3635330 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3635330' 00:05:05.787 killing process with pid 3635330 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3635330 00:05:05.787 08:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3635330 00:05:06.046 00:05:06.046 real 0m1.213s 00:05:06.046 user 0m1.299s 00:05:06.046 sys 0m0.467s 00:05:06.046 08:50:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.046 08:50:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.046 ************************************ 00:05:06.046 END TEST exit_on_failed_rpc_init 00:05:06.046 ************************************ 00:05:06.305 08:50:44 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:06.305 00:05:06.305 real 0m13.447s 00:05:06.305 user 0m12.636s 00:05:06.305 sys 0m1.671s 00:05:06.305 08:50:44 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.305 08:50:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.305 
************************************ 00:05:06.305 END TEST skip_rpc 00:05:06.305 ************************************ 00:05:06.305 08:50:44 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.305 08:50:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.305 08:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.305 08:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:06.305 ************************************ 00:05:06.305 START TEST rpc_client 00:05:06.305 ************************************ 00:05:06.305 08:50:44 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:06.305 * Looking for test storage... 00:05:06.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:06.305 08:50:44 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:06.305 OK 00:05:06.305 08:50:44 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.305 00:05:06.305 real 0m0.063s 00:05:06.305 user 0m0.028s 00:05:06.305 sys 0m0.040s 00:05:06.305 08:50:44 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.305 08:50:44 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.305 ************************************ 00:05:06.305 END TEST rpc_client 00:05:06.305 ************************************ 00:05:06.305 08:50:44 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.305 08:50:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.305 08:50:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.305 08:50:44 -- common/autotest_common.sh@10 -- # set +x 00:05:06.305 ************************************ 00:05:06.305 START TEST json_config 00:05:06.305 ************************************ 00:05:06.305 08:50:44 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:06.305 08:50:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:06.305 08:50:44 
json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:06.305 08:50:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.305 08:50:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.305 08:50:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.305 08:50:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.305 08:50:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.305 08:50:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.305 08:50:44 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.305 08:50:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@47 -- # : 0 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:06.305 08:50:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:06.305 08:50:44 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:06.306 INFO: JSON configuration test init 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.306 08:50:44 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:06.306 08:50:44 json_config -- json_config/common.sh@9 -- # local app=target 00:05:06.306 08:50:44 json_config -- json_config/common.sh@10 -- # shift 00:05:06.306 08:50:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:06.306 08:50:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:06.306 08:50:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:06.306 08:50:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.306 08:50:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:06.306 08:50:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3635588 00:05:06.306 08:50:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r 
/var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:06.306 08:50:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:06.306 Waiting for target to run... 00:05:06.306 08:50:44 json_config -- json_config/common.sh@25 -- # waitforlisten 3635588 /var/tmp/spdk_tgt.sock 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 3635588 ']' 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:06.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.306 08:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.566 [2024-07-24 08:50:44.444563] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:06.566 [2024-07-24 08:50:44.444659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3635588 ] 00:05:06.566 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.825 [2024-07-24 08:50:44.750128] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:06.825 [2024-07-24 08:50:44.783374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.825 [2024-07-24 08:50:44.850323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.393 08:50:45 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.394 08:50:45 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:07.394 08:50:45 json_config -- json_config/common.sh@26 -- # echo '' 00:05:07.394 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:07.394 08:50:45 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:07.394 08:50:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:07.394 08:50:45 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:07.394 08:50:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:07.394 08:50:45 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:07.394 08:50:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:10.686 
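The launch traced above follows the suite's standard pattern: start spdk_tgt in the background with -r pointing at a UNIX-domain RPC socket plus --wait-for-rpc, then block in waitforlisten until the socket answers before sending any configuration. A minimal sketch of that wait loop, assuming an SPDK checkout at a placeholder path (the real waitforlisten in autotest_common.sh is more elaborate):

# Start the target and poll its RPC socket; paths are illustrative.
SPDK_DIR=/path/to/spdk                      # assumption: your SPDK checkout
RPC_SOCK=/var/tmp/spdk_tgt.sock

"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$RPC_SOCK" --wait-for-rpc &
tgt_pid=$!

for ((i = 0; i < 100; i++)); do
    # rpc_get_methods is answerable even before framework_start_init
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        echo "target $tgt_pid is listening on $RPC_SOCK"
        break
    fi
    sleep 0.1
done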
08:50:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.686 08:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:10.686 08:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@51 -- # sort 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:10.686 08:50:48 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:10.686 08:50:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.686 08:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:10.946 08:50:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.946 08:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:10.946 08:50:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.946 08:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:10.946 MallocForNvmf0 00:05:10.946 08:50:49 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:10.946 08:50:49 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:11.205 MallocForNvmf1 00:05:11.205 08:50:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.205 08:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:11.463 [2024-07-24 08:50:49.538247] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:11.463 08:50:49 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.463 08:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:11.721 08:50:49 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.721 08:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:11.979 08:50:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:11.979 08:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:12.237 08:50:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:12.237 08:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:12.495 [2024-07-24 08:50:50.529479] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:12.495 08:50:50 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:12.495 08:50:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.495 08:50:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.495 08:50:50 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:12.495 08:50:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.495 08:50:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.495 08:50:50 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:12.495 08:50:50 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.495 08:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:12.753 MallocBdevForConfigChangeCheck 00:05:12.753 08:50:50 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:12.753 08:50:50 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:12.753 
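Condensed out of the trace, the create_nvmf_subsystem_config step is just this RPC sequence against the target socket; every command and argument below is taken verbatim from the log, only the rpc.py path is a placeholder:

RPC='/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'   # assumption: adjust to your tree

$RPC bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0         # io unit 8192, in-capsule data 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420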
08:50:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.753 08:50:50 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:12.753 08:50:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:13.321 08:50:51 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:13.321 INFO: shutting down applications... 00:05:13.321 08:50:51 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:13.321 08:50:51 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:13.321 08:50:51 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:13.321 08:50:51 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:15.228 Calling clear_iscsi_subsystem 00:05:15.228 Calling clear_nvmf_subsystem 00:05:15.228 Calling clear_nbd_subsystem 00:05:15.229 Calling clear_ublk_subsystem 00:05:15.229 Calling clear_vhost_blk_subsystem 00:05:15.229 Calling clear_vhost_scsi_subsystem 00:05:15.229 Calling clear_bdev_subsystem 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:15.229 08:50:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:15.229 08:50:53 json_config -- json_config/json_config.sh@349 -- # break 00:05:15.229 08:50:53 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:15.229 08:50:53 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:15.229 08:50:53 json_config -- json_config/common.sh@31 -- # local app=target 00:05:15.229 08:50:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:15.229 08:50:53 json_config -- json_config/common.sh@35 -- # [[ -n 3635588 ]] 00:05:15.229 08:50:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3635588 00:05:15.229 08:50:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:15.229 08:50:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.229 08:50:53 json_config -- json_config/common.sh@41 -- # kill -0 3635588 00:05:15.229 08:50:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:15.803 08:50:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:15.803 08:50:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:15.803 08:50:53 json_config -- json_config/common.sh@41 -- # kill -0 3635588 00:05:15.803 08:50:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:15.803 08:50:53 json_config -- json_config/common.sh@43 -- # break 00:05:15.803 08:50:53 json_config -- json_config/common.sh@48 -- # 
[[ -n '' ]] 00:05:15.803 08:50:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:15.803 SPDK target shutdown done 00:05:15.803 08:50:53 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:15.803 INFO: relaunching applications... 00:05:15.803 08:50:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.803 08:50:53 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.803 08:50:53 json_config -- json_config/common.sh@10 -- # shift 00:05:15.803 08:50:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.803 08:50:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.803 08:50:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.803 08:50:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.803 08:50:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.803 08:50:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3636798 00:05:15.803 08:50:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:15.803 08:50:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.803 Waiting for target to run... 00:05:15.803 08:50:53 json_config -- json_config/common.sh@25 -- # waitforlisten 3636798 /var/tmp/spdk_tgt.sock 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@829 -- # '[' -z 3636798 ']' 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.803 08:50:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.803 [2024-07-24 08:50:53.804243] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:15.803 [2024-07-24 08:50:53.804331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3636798 ] 00:05:15.803 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.370 [2024-07-24 08:50:54.291226] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
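The shutdown just traced (kill -SIGINT, then a bounded poll) is easier to read in one piece; this is the shape of json_config_test_shutdown_app as it appears in the trace, with the PID made a variable:

app_pid=3635588                 # the target PID from the trace; substitute your own
kill -SIGINT "$app_pid"

for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'   # matches the log message above
        break
    fi
    sleep 0.5                   # same back-off the harness uses
done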
00:05:16.370 [2024-07-24 08:50:54.324906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.370 [2024-07-24 08:50:54.408283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.745 [2024-07-24 08:50:57.440075] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:19.745 [2024-07-24 08:50:57.472568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:20.313 08:50:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.313 08:50:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:20.313 08:50:58 json_config -- json_config/common.sh@26 -- # echo '' 00:05:20.313 00:05:20.313 08:50:58 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:20.313 08:50:58 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:20.313 INFO: Checking if target configuration is the same... 00:05:20.313 08:50:58 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.314 08:50:58 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:20.314 08:50:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.314 + '[' 2 -ne 2 ']' 00:05:20.314 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.314 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.314 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.314 +++ basename /dev/fd/62 00:05:20.314 ++ mktemp /tmp/62.XXX 00:05:20.314 + tmp_file_1=/tmp/62.Zzh 00:05:20.314 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.314 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.314 + tmp_file_2=/tmp/spdk_tgt_config.json.YW6 00:05:20.314 + ret=0 00:05:20.314 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.572 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:20.572 + diff -u /tmp/62.Zzh /tmp/spdk_tgt_config.json.YW6 00:05:20.572 + echo 'INFO: JSON config files are the same' 00:05:20.572 INFO: JSON config files are the same 00:05:20.572 + rm /tmp/62.Zzh /tmp/spdk_tgt_config.json.YW6 00:05:20.572 + exit 0 00:05:20.572 08:50:58 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:20.572 08:50:58 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:20.572 INFO: changing configuration and checking if this can be detected... 
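Both comparison passes here rest on one trick: raw save_config output is order-sensitive, so json_diff.sh pipes each side through config_filter.py -method sort before diffing. A sketch of that flow, assuming config_filter.py filters stdin to stdout as the pipeline above suggests, with placeholder paths:

RPC='/path/to/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
FILTER=/path/to/spdk/test/json_config/config_filter.py

live=$(mktemp /tmp/62.XXX)
disk=$(mktemp /tmp/spdk_tgt_config.json.XXX)

$RPC save_config | $FILTER -method sort > "$live"      # the running target's view
$FILTER -method sort < spdk_tgt_config.json > "$disk"  # the config it was launched with

if diff -u "$live" "$disk"; then
    echo 'INFO: JSON config files are the same'
else
    echo 'INFO: configuration change detected.'
fi
rm -f "$live" "$disk"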
00:05:20.572 08:50:58 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.572 08:50:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:20.830 08:50:58 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.830 08:50:58 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:20.830 08:50:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:20.830 + '[' 2 -ne 2 ']' 00:05:20.830 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:20.830 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:20.830 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:20.830 +++ basename /dev/fd/62 00:05:20.830 ++ mktemp /tmp/62.XXX 00:05:20.830 + tmp_file_1=/tmp/62.UrI 00:05:20.830 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:20.830 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:20.830 + tmp_file_2=/tmp/spdk_tgt_config.json.QNU 00:05:20.830 + ret=0 00:05:20.830 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.399 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:21.399 + diff -u /tmp/62.UrI /tmp/spdk_tgt_config.json.QNU 00:05:21.399 + ret=1 00:05:21.399 + echo '=== Start of file: /tmp/62.UrI ===' 00:05:21.399 + cat /tmp/62.UrI 00:05:21.399 + echo '=== End of file: /tmp/62.UrI ===' 00:05:21.399 + echo '' 00:05:21.399 + echo '=== Start of file: /tmp/spdk_tgt_config.json.QNU ===' 00:05:21.399 + cat /tmp/spdk_tgt_config.json.QNU 00:05:21.399 + echo '=== End of file: /tmp/spdk_tgt_config.json.QNU ===' 00:05:21.399 + echo '' 00:05:21.399 + rm /tmp/62.UrI /tmp/spdk_tgt_config.json.QNU 00:05:21.399 + exit 1 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:21.399 INFO: configuration change detected. 
00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@321 -- # [[ -n 3636798 ]] 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.399 08:50:59 json_config -- json_config/json_config.sh@327 -- # killprocess 3636798 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@948 -- # '[' -z 3636798 ']' 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@952 -- # kill -0 3636798 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@953 -- # uname 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3636798 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3636798' 00:05:21.399 killing process with pid 3636798 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@967 -- # kill 3636798 00:05:21.399 08:50:59 json_config -- common/autotest_common.sh@972 -- # wait 3636798 00:05:23.305 08:51:00 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.305 08:51:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:23.305 08:51:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:23.305 08:51:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.305 08:51:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:23.305 08:51:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:23.305 INFO: Success 00:05:23.305 00:05:23.305 real 0m16.597s 
00:05:23.305 user 0m18.498s 00:05:23.305 sys 0m2.056s 00:05:23.305 08:51:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.305 08:51:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:23.305 ************************************ 00:05:23.305 END TEST json_config 00:05:23.305 ************************************ 00:05:23.305 08:51:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.305 08:51:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.305 08:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.305 08:51:00 -- common/autotest_common.sh@10 -- # set +x 00:05:23.305 ************************************ 00:05:23.305 START TEST json_config_extra_key 00:05:23.305 ************************************ 00:05:23.305 08:51:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:23.305 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.305 08:51:01 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.305 08:51:01 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.305 08:51:01 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.305 08:51:01 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.305 08:51:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.305 08:51:01 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.305 08:51:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:23.305 08:51:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:23.305 08:51:01 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:23.305 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:23.305 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:23.305 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:23.306 08:51:01 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:23.306 INFO: launching applications... 00:05:23.306 08:51:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3637881 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:23.306 Waiting for target to run... 00:05:23.306 08:51:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3637881 /var/tmp/spdk_tgt.sock 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3637881 ']' 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:23.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:23.306 08:51:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:23.306 [2024-07-24 08:51:01.084259] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:05:23.306 [2024-07-24 08:51:01.084356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3637881 ] 00:05:23.306 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.564 [2024-07-24 08:51:01.557668] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:23.564 [2024-07-24 08:51:01.591483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.564 [2024-07-24 08:51:01.669957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.132 08:51:02 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:24.132 08:51:02 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:24.132 00:05:24.132 08:51:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:24.132 INFO: shutting down applications... 00:05:24.132 08:51:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3637881 ]] 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3637881 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3637881 00:05:24.132 08:51:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3637881 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.698 08:51:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.698 SPDK target shutdown done 00:05:24.698 08:51:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:24.698 Success 00:05:24.698 00:05:24.698 real 0m1.561s 00:05:24.698 user 0m1.375s 00:05:24.698 sys 0m0.594s 00:05:24.698 08:51:02 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.698 08:51:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.698 ************************************ 00:05:24.698 END TEST json_config_extra_key 00:05:24.698 ************************************ 00:05:24.698 08:51:02 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.698 08:51:02 -- common/autotest_common.sh@1099 -- # 
'[' 2 -le 1 ']' 00:05:24.698 08:51:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.698 08:51:02 -- common/autotest_common.sh@10 -- # set +x 00:05:24.698 ************************************ 00:05:24.698 START TEST alias_rpc 00:05:24.698 ************************************ 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:24.698 * Looking for test storage... 00:05:24.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:24.698 08:51:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:24.698 08:51:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3638129 00:05:24.698 08:51:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:24.698 08:51:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3638129 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3638129 ']' 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:24.698 08:51:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.698 [2024-07-24 08:51:02.698965] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:24.698 [2024-07-24 08:51:02.699068] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638129 ] 00:05:24.698 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.698 [2024-07-24 08:51:02.732718] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
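Each suite in this log runs under the same run_test wrapper: it sanity-checks its argument count (the '[' 2 -le 1 ']' test above), prints the starred START/END banners, and times the body. A simplified stand-in, not the real autotest_common.sh implementation:

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"; local rc=$?     # the real harness also records per-test timing
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return $rc
}

# e.g. run_test alias_rpc /path/to/spdk/test/json_config/alias_rpc/alias_rpc.sh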
00:05:24.698 [2024-07-24 08:51:02.760310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.958 [2024-07-24 08:51:02.847033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.217 08:51:03 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:25.217 08:51:03 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:25.217 08:51:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:25.476 08:51:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3638129 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3638129 ']' 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3638129 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638129 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638129' 00:05:25.476 killing process with pid 3638129 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@967 -- # kill 3638129 00:05:25.476 08:51:03 alias_rpc -- common/autotest_common.sh@972 -- # wait 3638129 00:05:25.733 00:05:25.733 real 0m1.219s 00:05:25.733 user 0m1.299s 00:05:25.733 sys 0m0.437s 00:05:25.733 08:51:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.733 08:51:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.733 ************************************ 00:05:25.733 END TEST alias_rpc 00:05:25.733 ************************************ 00:05:25.733 08:51:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:25.733 08:51:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.733 08:51:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.734 08:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.734 08:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:25.991 ************************************ 00:05:25.991 START TEST spdkcli_tcp 00:05:25.991 ************************************ 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:25.991 * Looking for test storage... 
00:05:25.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3638432 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.991 08:51:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3638432 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3638432 ']' 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.991 08:51:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.991 [2024-07-24 08:51:03.964218] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:25.991 [2024-07-24 08:51:03.964313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638432 ] 00:05:25.991 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.991 [2024-07-24 08:51:03.996838] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
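For the tcp test, the target still serves RPC only on its UNIX socket, so the script bridges TCP port 9998 to that socket with socat and then points rpc.py at 127.0.0.1:9998, which is exactly what the next records show. The essential lines, with the addresses taken from the trace and the cleanup added as an assumption:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # forward one TCP client to the RPC socket
socat_pid=$!

# -r retries the connection, -t sets the timeout, -s/-p select host and port
/path/to/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

kill "$socat_pid" 2>/dev/null   # assumption: the real script tears this down in err_cleanup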
00:05:25.991 [2024-07-24 08:51:04.024605] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.249 [2024-07-24 08:51:04.112059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.249 [2024-07-24 08:51:04.112062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.508 08:51:04 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.508 08:51:04 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:26.508 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3638442 00:05:26.508 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.508 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.508 [ 00:05:26.508 "bdev_malloc_delete", 00:05:26.508 "bdev_malloc_create", 00:05:26.508 "bdev_null_resize", 00:05:26.508 "bdev_null_delete", 00:05:26.508 "bdev_null_create", 00:05:26.508 "bdev_nvme_cuse_unregister", 00:05:26.508 "bdev_nvme_cuse_register", 00:05:26.508 "bdev_opal_new_user", 00:05:26.508 "bdev_opal_set_lock_state", 00:05:26.508 "bdev_opal_delete", 00:05:26.508 "bdev_opal_get_info", 00:05:26.508 "bdev_opal_create", 00:05:26.508 "bdev_nvme_opal_revert", 00:05:26.508 "bdev_nvme_opal_init", 00:05:26.508 "bdev_nvme_send_cmd", 00:05:26.509 "bdev_nvme_get_path_iostat", 00:05:26.509 "bdev_nvme_get_mdns_discovery_info", 00:05:26.509 "bdev_nvme_stop_mdns_discovery", 00:05:26.509 "bdev_nvme_start_mdns_discovery", 00:05:26.509 "bdev_nvme_set_multipath_policy", 00:05:26.509 "bdev_nvme_set_preferred_path", 00:05:26.509 "bdev_nvme_get_io_paths", 00:05:26.509 "bdev_nvme_remove_error_injection", 00:05:26.509 "bdev_nvme_add_error_injection", 00:05:26.509 "bdev_nvme_get_discovery_info", 00:05:26.509 "bdev_nvme_stop_discovery", 00:05:26.509 "bdev_nvme_start_discovery", 00:05:26.509 "bdev_nvme_get_controller_health_info", 00:05:26.509 "bdev_nvme_disable_controller", 00:05:26.509 "bdev_nvme_enable_controller", 00:05:26.509 "bdev_nvme_reset_controller", 00:05:26.509 "bdev_nvme_get_transport_statistics", 00:05:26.509 "bdev_nvme_apply_firmware", 00:05:26.509 "bdev_nvme_detach_controller", 00:05:26.509 "bdev_nvme_get_controllers", 00:05:26.509 "bdev_nvme_attach_controller", 00:05:26.509 "bdev_nvme_set_hotplug", 00:05:26.509 "bdev_nvme_set_options", 00:05:26.509 "bdev_passthru_delete", 00:05:26.509 "bdev_passthru_create", 00:05:26.509 "bdev_lvol_set_parent_bdev", 00:05:26.509 "bdev_lvol_set_parent", 00:05:26.509 "bdev_lvol_check_shallow_copy", 00:05:26.509 "bdev_lvol_start_shallow_copy", 00:05:26.509 "bdev_lvol_grow_lvstore", 00:05:26.509 "bdev_lvol_get_lvols", 00:05:26.509 "bdev_lvol_get_lvstores", 00:05:26.509 "bdev_lvol_delete", 00:05:26.509 "bdev_lvol_set_read_only", 00:05:26.509 "bdev_lvol_resize", 00:05:26.509 "bdev_lvol_decouple_parent", 00:05:26.509 "bdev_lvol_inflate", 00:05:26.509 "bdev_lvol_rename", 00:05:26.509 "bdev_lvol_clone_bdev", 00:05:26.509 "bdev_lvol_clone", 00:05:26.509 "bdev_lvol_snapshot", 00:05:26.509 "bdev_lvol_create", 00:05:26.509 "bdev_lvol_delete_lvstore", 00:05:26.509 "bdev_lvol_rename_lvstore", 00:05:26.509 "bdev_lvol_create_lvstore", 00:05:26.509 "bdev_raid_set_options", 00:05:26.509 "bdev_raid_remove_base_bdev", 00:05:26.509 "bdev_raid_add_base_bdev", 00:05:26.509 "bdev_raid_delete", 00:05:26.509 "bdev_raid_create", 00:05:26.509 "bdev_raid_get_bdevs", 00:05:26.509 "bdev_error_inject_error", 00:05:26.509 "bdev_error_delete", 
00:05:26.509 "bdev_error_create", 00:05:26.509 "bdev_split_delete", 00:05:26.509 "bdev_split_create", 00:05:26.509 "bdev_delay_delete", 00:05:26.509 "bdev_delay_create", 00:05:26.509 "bdev_delay_update_latency", 00:05:26.509 "bdev_zone_block_delete", 00:05:26.509 "bdev_zone_block_create", 00:05:26.509 "blobfs_create", 00:05:26.509 "blobfs_detect", 00:05:26.509 "blobfs_set_cache_size", 00:05:26.509 "bdev_aio_delete", 00:05:26.509 "bdev_aio_rescan", 00:05:26.509 "bdev_aio_create", 00:05:26.509 "bdev_ftl_set_property", 00:05:26.509 "bdev_ftl_get_properties", 00:05:26.509 "bdev_ftl_get_stats", 00:05:26.509 "bdev_ftl_unmap", 00:05:26.509 "bdev_ftl_unload", 00:05:26.509 "bdev_ftl_delete", 00:05:26.509 "bdev_ftl_load", 00:05:26.509 "bdev_ftl_create", 00:05:26.509 "bdev_virtio_attach_controller", 00:05:26.509 "bdev_virtio_scsi_get_devices", 00:05:26.509 "bdev_virtio_detach_controller", 00:05:26.509 "bdev_virtio_blk_set_hotplug", 00:05:26.509 "bdev_iscsi_delete", 00:05:26.509 "bdev_iscsi_create", 00:05:26.509 "bdev_iscsi_set_options", 00:05:26.509 "accel_error_inject_error", 00:05:26.509 "ioat_scan_accel_module", 00:05:26.509 "dsa_scan_accel_module", 00:05:26.509 "iaa_scan_accel_module", 00:05:26.509 "vfu_virtio_create_scsi_endpoint", 00:05:26.509 "vfu_virtio_scsi_remove_target", 00:05:26.509 "vfu_virtio_scsi_add_target", 00:05:26.509 "vfu_virtio_create_blk_endpoint", 00:05:26.509 "vfu_virtio_delete_endpoint", 00:05:26.509 "keyring_file_remove_key", 00:05:26.509 "keyring_file_add_key", 00:05:26.509 "keyring_linux_set_options", 00:05:26.509 "iscsi_get_histogram", 00:05:26.509 "iscsi_enable_histogram", 00:05:26.509 "iscsi_set_options", 00:05:26.509 "iscsi_get_auth_groups", 00:05:26.509 "iscsi_auth_group_remove_secret", 00:05:26.509 "iscsi_auth_group_add_secret", 00:05:26.509 "iscsi_delete_auth_group", 00:05:26.509 "iscsi_create_auth_group", 00:05:26.509 "iscsi_set_discovery_auth", 00:05:26.509 "iscsi_get_options", 00:05:26.509 "iscsi_target_node_request_logout", 00:05:26.509 "iscsi_target_node_set_redirect", 00:05:26.509 "iscsi_target_node_set_auth", 00:05:26.509 "iscsi_target_node_add_lun", 00:05:26.509 "iscsi_get_stats", 00:05:26.509 "iscsi_get_connections", 00:05:26.509 "iscsi_portal_group_set_auth", 00:05:26.509 "iscsi_start_portal_group", 00:05:26.509 "iscsi_delete_portal_group", 00:05:26.509 "iscsi_create_portal_group", 00:05:26.509 "iscsi_get_portal_groups", 00:05:26.509 "iscsi_delete_target_node", 00:05:26.509 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.509 "iscsi_target_node_add_pg_ig_maps", 00:05:26.509 "iscsi_create_target_node", 00:05:26.509 "iscsi_get_target_nodes", 00:05:26.509 "iscsi_delete_initiator_group", 00:05:26.509 "iscsi_initiator_group_remove_initiators", 00:05:26.509 "iscsi_initiator_group_add_initiators", 00:05:26.509 "iscsi_create_initiator_group", 00:05:26.509 "iscsi_get_initiator_groups", 00:05:26.509 "nvmf_set_crdt", 00:05:26.509 "nvmf_set_config", 00:05:26.509 "nvmf_set_max_subsystems", 00:05:26.509 "nvmf_stop_mdns_prr", 00:05:26.509 "nvmf_publish_mdns_prr", 00:05:26.509 "nvmf_subsystem_get_listeners", 00:05:26.509 "nvmf_subsystem_get_qpairs", 00:05:26.509 "nvmf_subsystem_get_controllers", 00:05:26.509 "nvmf_get_stats", 00:05:26.509 "nvmf_get_transports", 00:05:26.509 "nvmf_create_transport", 00:05:26.509 "nvmf_get_targets", 00:05:26.509 "nvmf_delete_target", 00:05:26.509 "nvmf_create_target", 00:05:26.509 "nvmf_subsystem_allow_any_host", 00:05:26.509 "nvmf_subsystem_remove_host", 00:05:26.509 "nvmf_subsystem_add_host", 00:05:26.509 "nvmf_ns_remove_host", 
00:05:26.509 "nvmf_ns_add_host", 00:05:26.509 "nvmf_subsystem_remove_ns", 00:05:26.509 "nvmf_subsystem_add_ns", 00:05:26.509 "nvmf_subsystem_listener_set_ana_state", 00:05:26.509 "nvmf_discovery_get_referrals", 00:05:26.509 "nvmf_discovery_remove_referral", 00:05:26.509 "nvmf_discovery_add_referral", 00:05:26.509 "nvmf_subsystem_remove_listener", 00:05:26.509 "nvmf_subsystem_add_listener", 00:05:26.509 "nvmf_delete_subsystem", 00:05:26.509 "nvmf_create_subsystem", 00:05:26.509 "nvmf_get_subsystems", 00:05:26.509 "env_dpdk_get_mem_stats", 00:05:26.509 "nbd_get_disks", 00:05:26.509 "nbd_stop_disk", 00:05:26.509 "nbd_start_disk", 00:05:26.509 "ublk_recover_disk", 00:05:26.509 "ublk_get_disks", 00:05:26.509 "ublk_stop_disk", 00:05:26.509 "ublk_start_disk", 00:05:26.509 "ublk_destroy_target", 00:05:26.509 "ublk_create_target", 00:05:26.509 "virtio_blk_create_transport", 00:05:26.509 "virtio_blk_get_transports", 00:05:26.509 "vhost_controller_set_coalescing", 00:05:26.509 "vhost_get_controllers", 00:05:26.509 "vhost_delete_controller", 00:05:26.509 "vhost_create_blk_controller", 00:05:26.509 "vhost_scsi_controller_remove_target", 00:05:26.509 "vhost_scsi_controller_add_target", 00:05:26.509 "vhost_start_scsi_controller", 00:05:26.509 "vhost_create_scsi_controller", 00:05:26.509 "thread_set_cpumask", 00:05:26.509 "framework_get_governor", 00:05:26.509 "framework_get_scheduler", 00:05:26.509 "framework_set_scheduler", 00:05:26.509 "framework_get_reactors", 00:05:26.509 "thread_get_io_channels", 00:05:26.509 "thread_get_pollers", 00:05:26.509 "thread_get_stats", 00:05:26.509 "framework_monitor_context_switch", 00:05:26.509 "spdk_kill_instance", 00:05:26.509 "log_enable_timestamps", 00:05:26.509 "log_get_flags", 00:05:26.509 "log_clear_flag", 00:05:26.509 "log_set_flag", 00:05:26.509 "log_get_level", 00:05:26.509 "log_set_level", 00:05:26.509 "log_get_print_level", 00:05:26.509 "log_set_print_level", 00:05:26.509 "framework_enable_cpumask_locks", 00:05:26.509 "framework_disable_cpumask_locks", 00:05:26.509 "framework_wait_init", 00:05:26.509 "framework_start_init", 00:05:26.509 "scsi_get_devices", 00:05:26.509 "bdev_get_histogram", 00:05:26.509 "bdev_enable_histogram", 00:05:26.509 "bdev_set_qos_limit", 00:05:26.509 "bdev_set_qd_sampling_period", 00:05:26.509 "bdev_get_bdevs", 00:05:26.509 "bdev_reset_iostat", 00:05:26.509 "bdev_get_iostat", 00:05:26.509 "bdev_examine", 00:05:26.509 "bdev_wait_for_examine", 00:05:26.509 "bdev_set_options", 00:05:26.509 "notify_get_notifications", 00:05:26.509 "notify_get_types", 00:05:26.509 "accel_get_stats", 00:05:26.509 "accel_set_options", 00:05:26.509 "accel_set_driver", 00:05:26.509 "accel_crypto_key_destroy", 00:05:26.509 "accel_crypto_keys_get", 00:05:26.509 "accel_crypto_key_create", 00:05:26.509 "accel_assign_opc", 00:05:26.509 "accel_get_module_info", 00:05:26.509 "accel_get_opc_assignments", 00:05:26.509 "vmd_rescan", 00:05:26.509 "vmd_remove_device", 00:05:26.509 "vmd_enable", 00:05:26.509 "sock_get_default_impl", 00:05:26.509 "sock_set_default_impl", 00:05:26.509 "sock_impl_set_options", 00:05:26.509 "sock_impl_get_options", 00:05:26.509 "iobuf_get_stats", 00:05:26.509 "iobuf_set_options", 00:05:26.509 "keyring_get_keys", 00:05:26.509 "framework_get_pci_devices", 00:05:26.509 "framework_get_config", 00:05:26.509 "framework_get_subsystems", 00:05:26.509 "vfu_tgt_set_base_path", 00:05:26.509 "trace_get_info", 00:05:26.509 "trace_get_tpoint_group_mask", 00:05:26.509 "trace_disable_tpoint_group", 00:05:26.510 "trace_enable_tpoint_group", 00:05:26.510 
"trace_clear_tpoint_mask", 00:05:26.510 "trace_set_tpoint_mask", 00:05:26.510 "spdk_get_version", 00:05:26.510 "rpc_get_methods" 00:05:26.510 ] 00:05:26.510 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.510 08:51:04 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:26.510 08:51:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.769 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.769 08:51:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3638432 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3638432 ']' 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3638432 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638432 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638432' 00:05:26.769 killing process with pid 3638432 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3638432 00:05:26.769 08:51:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3638432 00:05:27.029 00:05:27.029 real 0m1.203s 00:05:27.029 user 0m2.132s 00:05:27.029 sys 0m0.450s 00:05:27.029 08:51:05 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.029 08:51:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.029 ************************************ 00:05:27.029 END TEST spdkcli_tcp 00:05:27.029 ************************************ 00:05:27.029 08:51:05 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.029 08:51:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.029 08:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.029 08:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:27.029 ************************************ 00:05:27.029 START TEST dpdk_mem_utility 00:05:27.029 ************************************ 00:05:27.029 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:27.289 * Looking for test storage... 
00:05:27.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:27.289 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.289 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3638632 00:05:27.289 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.289 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3638632 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3638632 ']' 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:27.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.289 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.289 [2024-07-24 08:51:05.213593] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:27.289 [2024-07-24 08:51:05.213672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638632 ] 00:05:27.289 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.289 [2024-07-24 08:51:05.245732] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
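test_dpdk_mem_info.sh has two moving parts, both visible below: the env_dpdk_get_mem_stats RPC makes the running target write a snapshot of its DPDK allocations (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py parses that snapshot into the heap/mempool/memzone summaries printed further down. Reproduced by hand, the flow is roughly:

    # 1. ask the running target to dump its DPDK memory state
    ./scripts/rpc.py env_dpdk_get_mem_stats    # reply: {"filename": "/tmp/spdk_mem_dump.txt"}
    # 2. summarize the dump: heaps, mempools, memzones
    ./scripts/dpdk_mem_info.py
    # 3. per-element detail for heap 0, which the test also checks
    ./scripts/dpdk_mem_info.py -m 0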
00:05:27.289 [2024-07-24 08:51:05.271900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.289 [2024-07-24 08:51:05.356473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.549 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.549 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:27.549 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:27.549 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:27.549 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.549 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:27.549 { 00:05:27.549 "filename": "/tmp/spdk_mem_dump.txt" 00:05:27.549 } 00:05:27.549 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.549 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:27.808 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:27.808 1 heaps totaling size 814.000000 MiB 00:05:27.808 size: 814.000000 MiB heap id: 0 00:05:27.808 end heaps---------- 00:05:27.808 8 mempools totaling size 598.116089 MiB 00:05:27.809 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:27.809 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:27.809 size: 84.521057 MiB name: bdev_io_3638632 00:05:27.809 size: 51.011292 MiB name: evtpool_3638632 00:05:27.809 size: 50.003479 MiB name: msgpool_3638632 00:05:27.809 size: 21.763794 MiB name: PDU_Pool 00:05:27.809 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:27.809 size: 0.026123 MiB name: Session_Pool 00:05:27.809 end mempools------- 00:05:27.809 6 memzones totaling size 4.142822 MiB 00:05:27.809 size: 1.000366 MiB name: RG_ring_0_3638632 00:05:27.809 size: 1.000366 MiB name: RG_ring_1_3638632 00:05:27.809 size: 1.000366 MiB name: RG_ring_4_3638632 00:05:27.809 size: 1.000366 MiB name: RG_ring_5_3638632 00:05:27.809 size: 0.125366 MiB name: RG_ring_2_3638632 00:05:27.809 size: 0.015991 MiB name: RG_ring_3_3638632 00:05:27.809 end memzones------- 00:05:27.809 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:27.809 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:27.809 list of free elements. 
size: 12.519348 MiB 00:05:27.809 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:27.809 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:27.809 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:27.809 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:27.809 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:27.809 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:27.809 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:27.809 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:27.809 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:27.809 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:27.809 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:27.809 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:27.809 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:27.809 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:27.809 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:27.809 list of standard malloc elements. size: 199.218079 MiB 00:05:27.809 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:27.809 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:27.809 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:27.809 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:27.809 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:27.809 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:27.809 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:27.809 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:27.809 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:27.809 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:05:27.809 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:27.809 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:27.809 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:27.809 list of memzone associated elements. size: 602.262573 MiB 00:05:27.809 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:27.809 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:27.809 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:27.809 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:27.809 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:27.809 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3638632_0 00:05:27.809 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:27.809 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3638632_0 00:05:27.809 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:27.809 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3638632_0 00:05:27.809 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:27.809 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:27.809 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:27.809 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:27.809 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:27.809 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3638632 00:05:27.809 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:27.809 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3638632 00:05:27.809 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:27.809 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3638632 00:05:27.809 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:27.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:27.809 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:27.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:27.809 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:27.809 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:27.809 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:27.809 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:27.809 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:27.809 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3638632 00:05:27.809 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:27.809 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3638632 00:05:27.809 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:27.809 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3638632 00:05:27.809 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:27.809 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3638632 00:05:27.809 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:27.809 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3638632 00:05:27.809 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:27.809 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:27.809 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:27.809 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:27.809 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:27.809 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:27.809 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:27.809 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3638632 00:05:27.809 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:27.809 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:27.809 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:27.809 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:27.809 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:27.809 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3638632 00:05:27.809 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:27.809 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:27.809 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:27.809 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3638632 00:05:27.809 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:27.809 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3638632 00:05:27.809 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:27.809 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:27.809 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:27.809 08:51:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3638632 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3638632 ']' 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3638632 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3638632 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.809 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.810 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3638632' 00:05:27.810 killing process with pid 3638632 00:05:27.810 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3638632 00:05:27.810 08:51:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3638632 00:05:28.069 00:05:28.069 real 0m1.055s 00:05:28.069 user 0m1.016s 00:05:28.069 sys 0m0.404s 00:05:28.069 08:51:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.069 08:51:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.069 ************************************ 00:05:28.069 END TEST dpdk_mem_utility 00:05:28.069 ************************************ 00:05:28.328 08:51:06 -- spdk/autotest.sh@181 -- # run_test event 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.328 08:51:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.328 08:51:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.328 08:51:06 -- common/autotest_common.sh@10 -- # set +x 00:05:28.328 ************************************ 00:05:28.328 START TEST event 00:05:28.328 ************************************ 00:05:28.328 08:51:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:28.328 * Looking for test storage... 00:05:28.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:28.328 08:51:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:28.328 08:51:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:28.328 08:51:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.328 08:51:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:28.328 08:51:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.328 08:51:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.328 ************************************ 00:05:28.328 START TEST event_perf 00:05:28.328 ************************************ 00:05:28.328 08:51:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:28.328 Running I/O for 1 seconds...[2024-07-24 08:51:06.298624] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:28.328 [2024-07-24 08:51:06.298687] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3638821 ] 00:05:28.328 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.328 [2024-07-24 08:51:06.330302] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:28.328 [2024-07-24 08:51:06.360410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:28.586 [2024-07-24 08:51:06.456437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.586 [2024-07-24 08:51:06.456490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:28.586 [2024-07-24 08:51:06.456603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:28.586 [2024-07-24 08:51:06.456606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.526 Running I/O for 1 seconds... 00:05:29.526 lcore 0: 232968 00:05:29.526 lcore 1: 232966 00:05:29.526 lcore 2: 232966 00:05:29.526 lcore 3: 232966 00:05:29.526 done. 
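The four lcore counters above are events dispatched per core during the one-second run (-m 0xF -t 1): 232968 + 3 × 232966 = 931,866 events in total, i.e. roughly 233 k events per core per second on this node.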
00:05:29.526 00:05:29.526 real 0m1.251s 00:05:29.526 user 0m4.162s 00:05:29.526 sys 0m0.082s 00:05:29.526 08:51:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.526 08:51:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.526 ************************************ 00:05:29.526 END TEST event_perf 00:05:29.526 ************************************ 00:05:29.526 08:51:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.526 08:51:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:29.526 08:51:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.526 08:51:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.526 ************************************ 00:05:29.526 START TEST event_reactor 00:05:29.526 ************************************ 00:05:29.526 08:51:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:29.526 [2024-07-24 08:51:07.595271] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:29.526 [2024-07-24 08:51:07.595331] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639138 ] 00:05:29.526 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.526 [2024-07-24 08:51:07.631006] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:29.784 [2024-07-24 08:51:07.660917] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.784 [2024-07-24 08:51:07.754543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.720 test_start 00:05:30.720 oneshot 00:05:30.720 tick 100 00:05:30.720 tick 100 00:05:30.720 tick 250 00:05:30.720 tick 100 00:05:30.720 tick 100 00:05:30.720 tick 250 00:05:30.720 tick 100 00:05:30.720 tick 500 00:05:30.720 tick 100 00:05:30.720 tick 100 00:05:30.720 tick 250 00:05:30.720 tick 100 00:05:30.720 tick 100 00:05:30.720 test_end 00:05:30.720 00:05:30.720 real 0m1.250s 00:05:30.720 user 0m1.159s 00:05:30.720 sys 0m0.086s 00:05:30.720 08:51:08 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.720 08:51:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:30.720 ************************************ 00:05:30.720 END TEST event_reactor 00:05:30.720 ************************************ 00:05:30.979 08:51:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.979 08:51:08 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:30.979 08:51:08 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.979 08:51:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.979 ************************************ 00:05:30.979 START TEST event_reactor_perf 00:05:30.979 ************************************ 00:05:30.979 08:51:08 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:30.979 [2024-07-24 08:51:08.892831] Starting SPDK v24.09-pre git sha1 
78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:30.979 [2024-07-24 08:51:08.892896] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639570 ] 00:05:30.979 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.979 [2024-07-24 08:51:08.928370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:30.979 [2024-07-24 08:51:08.958360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.979 [2024-07-24 08:51:09.052475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.357 test_start 00:05:32.357 test_end 00:05:32.357 Performance: 357417 events per second 00:05:32.357 00:05:32.357 real 0m1.253s 00:05:32.357 user 0m1.160s 00:05:32.357 sys 0m0.087s 00:05:32.357 08:51:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.357 08:51:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:32.357 ************************************ 00:05:32.357 END TEST event_reactor_perf 00:05:32.357 ************************************ 00:05:32.357 08:51:10 event -- event/event.sh@49 -- # uname -s 00:05:32.357 08:51:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:32.357 08:51:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.357 08:51:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.357 08:51:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.357 08:51:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.357 ************************************ 00:05:32.357 START TEST event_scheduler 00:05:32.357 ************************************ 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:32.357 * Looking for test storage... 00:05:32.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:32.357 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:32.357 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3639821 00:05:32.357 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:32.357 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:32.357 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3639821 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3639821 ']' 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
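scheduler.sh launches the test app with --wait-for-rpc, so the framework is configured over RPC before initialization finishes: the lines below switch to the dynamic scheduler, run framework_start_init, and then create test threads through the plugin RPC module shipped with the test. By hand the same sequence looks roughly like this, assuming rpc.py can import scheduler_plugin (the harness arranges that via PYTHONPATH):

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init
    # plugin RPC: a thread pinned to core 0 that is busy 100% of the time
    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100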
00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.357 08:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.357 [2024-07-24 08:51:10.280659] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:32.357 [2024-07-24 08:51:10.280724] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3639821 ] 00:05:32.357 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.357 [2024-07-24 08:51:10.317030] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:32.357 [2024-07-24 08:51:10.344731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.357 [2024-07-24 08:51:10.434273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.357 [2024-07-24 08:51:10.434325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.357 [2024-07-24 08:51:10.434322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.357 [2024-07-24 08:51:10.434298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:32.618 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 [2024-07-24 08:51:10.511211] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:32.618 [2024-07-24 08:51:10.511239] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.618 [2024-07-24 08:51:10.511257] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.618 [2024-07-24 08:51:10.511269] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.618 [2024-07-24 08:51:10.511280] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 [2024-07-24 08:51:10.606659] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 ************************************ 00:05:32.618 START TEST scheduler_create_thread 00:05:32.618 ************************************ 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 2 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 3 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 4 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 5 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 6 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 7 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 8 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 9 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.618 10 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.618 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.619 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.879 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:32.879 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:32.879 08:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:32.879 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:32.879 08:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.138 08:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:33.138 00:05:33.138 real 0m0.591s 00:05:33.138 user 0m0.008s 00:05:33.138 sys 0m0.005s 00:05:33.138 08:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.138 08:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.138 ************************************ 00:05:33.138 END TEST scheduler_create_thread 00:05:33.138 ************************************ 00:05:33.138 08:51:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:33.138 08:51:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3639821 00:05:33.138 08:51:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3639821 ']' 00:05:33.138 08:51:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3639821 00:05:33.138 08:51:11 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:33.138 08:51:11 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:33.138 08:51:11 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3639821 00:05:33.395 08:51:11 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:33.395 08:51:11 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:33.395 08:51:11 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3639821' 00:05:33.395 killing process with pid 3639821 00:05:33.395 08:51:11 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3639821 00:05:33.395 08:51:11 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3639821 00:05:33.653 [2024-07-24 08:51:11.706678] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
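A side note on the killprocess guard used above: it reads the process comm with ps --no-headers -o comm= and refuses to kill anything named sudo. SPDK apps show up under their main reactor's thread name, which is why the scheduler app (main lcore 2, selected with -p 0x2) appears as reactor_2 here, while the single-main-core targets earlier in the log appear as reactor_0.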
00:05:33.911 00:05:33.911 real 0m1.736s 00:05:33.911 user 0m2.266s 00:05:33.911 sys 0m0.339s 00:05:33.911 08:51:11 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.911 08:51:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:33.911 ************************************ 00:05:33.911 END TEST event_scheduler 00:05:33.911 ************************************ 00:05:33.911 08:51:11 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:33.911 08:51:11 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:33.911 08:51:11 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:33.911 08:51:11 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.911 08:51:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.911 ************************************ 00:05:33.911 START TEST app_repeat 00:05:33.911 ************************************ 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3640132 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3640132' 00:05:33.911 Process app_repeat pid: 3640132 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:33.911 spdk_app_start Round 0 00:05:33.911 08:51:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3640132 /var/tmp/spdk-nbd.sock 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3640132 ']' 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.911 08:51:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.911 [2024-07-24 08:51:11.992013] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
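app_repeat serves RPC on its own socket, /var/tmp/spdk-nbd.sock, and each of its rounds repeats the same cycle: create two 64 MiB malloc bdevs with 4 KiB blocks, export them as /dev/nbd0 and /dev/nbd1, verify them with direct-I/O dd plus cmp, then tear everything down. One device's worth of the cycle, sketched by hand (/tmp/pattern stands in for the test's nbdrandtest file):

    R=/var/tmp/spdk-nbd.sock
    ./scripts/rpc.py -s $R bdev_malloc_create 64 4096     # -> Malloc0
    ./scripts/rpc.py -s $R nbd_start_disk Malloc0 /dev/nbd0
    # write a 1 MiB random pattern through the nbd device, then compare
    dd if=/dev/urandom of=/tmp/pattern bs=4096 count=256
    dd if=/tmp/pattern of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M /tmp/pattern /dev/nbd0
    ./scripts/rpc.py -s $R nbd_stop_disk /dev/nbd0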
00:05:33.911 [2024-07-24 08:51:11.992075] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3640132 ] 00:05:33.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.911 [2024-07-24 08:51:12.026892] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:34.168 [2024-07-24 08:51:12.058978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.168 [2024-07-24 08:51:12.151014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.168 [2024-07-24 08:51:12.151019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.168 08:51:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.168 08:51:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:34.168 08:51:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.426 Malloc0 00:05:34.426 08:51:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:34.683 Malloc1 00:05:34.683 08:51:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.683 08:51:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.940 /dev/nbd0 00:05:34.940 08:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:35.199 08:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.199 08:51:13 
event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.199 1+0 records in 00:05:35.199 1+0 records out 00:05:35.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152037 s, 26.9 MB/s 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.199 08:51:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.199 08:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.199 08:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.199 08:51:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:35.458 /dev/nbd1 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:35.458 1+0 records in 00:05:35.458 1+0 records out 00:05:35.458 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193564 s, 21.2 MB/s 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:35.458 08:51:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:35.458 
08:51:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.458 08:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.727 08:51:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:35.728 { 00:05:35.728 "nbd_device": "/dev/nbd0", 00:05:35.728 "bdev_name": "Malloc0" 00:05:35.728 }, 00:05:35.728 { 00:05:35.728 "nbd_device": "/dev/nbd1", 00:05:35.728 "bdev_name": "Malloc1" 00:05:35.728 } 00:05:35.728 ]' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:35.728 { 00:05:35.728 "nbd_device": "/dev/nbd0", 00:05:35.728 "bdev_name": "Malloc0" 00:05:35.728 }, 00:05:35.728 { 00:05:35.728 "nbd_device": "/dev/nbd1", 00:05:35.728 "bdev_name": "Malloc1" 00:05:35.728 } 00:05:35.728 ]' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:35.728 /dev/nbd1' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:35.728 /dev/nbd1' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:35.728 256+0 records in 00:05:35.728 256+0 records out 00:05:35.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491329 s, 213 MB/s 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:35.728 256+0 records in 00:05:35.728 256+0 records out 00:05:35.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235931 s, 44.4 MB/s 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:35.728 256+0 records in 00:05:35.728 256+0 records out 00:05:35.728 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0226519 s, 46.3 MB/s 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.728 08:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:35.989 08:51:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:36.272 08:51:14 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.272 08:51:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:36.542 08:51:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:36.542 08:51:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:36.799 08:51:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:37.057 [2024-07-24 08:51:15.027273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.057 [2024-07-24 08:51:15.116868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.057 [2024-07-24 08:51:15.116883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.315 [2024-07-24 08:51:15.176241] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:37.315 [2024-07-24 08:51:15.176313] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
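Each app_repeat round performs the same create/export/verify/teardown cycle; Round 0 has just completed above and spdk_app_start has been re-entered for Round 1. Replayed by hand, the sequence is roughly the following (a minimal sketch: rpc.py abbreviates the full scripts/rpc.py path used in the log, and the retry loops and error handling of nbd_rpc_data_verify are omitted):

  # create two 64 MiB malloc bdevs with a 4096-byte block size
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # -> Malloc1

  # export them as kernel nbd devices and confirm both are listed
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks

  # push 1 MiB of random data through each device, then verify byte-for-byte
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct
      cmp -b -n 1M nbdrandtest $d
  done
  rm nbdrandtest

  # tear down the exports and stop the app; app_repeat then starts the next round
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM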
00:05:39.852 08:51:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.852 08:51:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:39.852 spdk_app_start Round 1 00:05:39.852 08:51:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3640132 /var/tmp/spdk-nbd.sock 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3640132 ']' 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.852 08:51:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:40.110 08:51:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.110 08:51:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:40.110 08:51:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.368 Malloc0 00:05:40.368 08:51:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.626 Malloc1 00:05:40.626 08:51:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.626 08:51:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.884 /dev/nbd0 00:05:40.884 08:51:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.884 08:51:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.884 1+0 records in 00:05:40.884 1+0 records out 00:05:40.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206075 s, 19.9 MB/s 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:40.884 08:51:18 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:40.884 08:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.884 08:51:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.884 08:51:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:41.143 /dev/nbd1 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:41.143 1+0 records in 00:05:41.143 1+0 records out 00:05:41.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203562 s, 20.1 MB/s 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:41.143 08:51:19 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:41.143 08:51:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.143 08:51:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:41.401 { 00:05:41.401 "nbd_device": "/dev/nbd0", 00:05:41.401 "bdev_name": "Malloc0" 00:05:41.401 }, 00:05:41.401 { 00:05:41.401 "nbd_device": "/dev/nbd1", 00:05:41.401 "bdev_name": "Malloc1" 00:05:41.401 } 00:05:41.401 ]' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:41.401 { 00:05:41.401 "nbd_device": "/dev/nbd0", 00:05:41.401 "bdev_name": "Malloc0" 00:05:41.401 }, 00:05:41.401 { 00:05:41.401 "nbd_device": "/dev/nbd1", 00:05:41.401 "bdev_name": "Malloc1" 00:05:41.401 } 00:05:41.401 ]' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:41.401 /dev/nbd1' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:41.401 /dev/nbd1' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.401 256+0 records in 00:05:41.401 256+0 records out 00:05:41.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00541148 s, 194 MB/s 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.401 08:51:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.401 256+0 records in 00:05:41.401 256+0 records out 00:05:41.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0206139 s, 50.9 MB/s 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.402 256+0 records in 00:05:41.402 256+0 records out 00:05:41.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247934 s, 42.3 MB/s 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.402 08:51:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.968 08:51:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.968 08:51:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:42.226 08:51:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:42.226 08:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:42.226 08:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:42.226 08:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:42.486 08:51:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:42.486 08:51:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.746 08:51:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.746 [2024-07-24 08:51:20.843840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.004 [2024-07-24 08:51:20.935767] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.004 [2024-07-24 08:51:20.935771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.004 [2024-07-24 08:51:21.000047] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:43.004 [2024-07-24 08:51:21.000127] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
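The waitfornbd helper traced before every write above polls until the nbd node is usable; reconstructed approximately from the xtrace (only the success path executes here, so the back-off sleeps and failure branches are assumptions):

  waitfornbd() {
      local nbd_name=$1
      local i
      # wait for the device to appear in /proc/partitions
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1    # assumed back-off, not visible in the trace
      done
      # the node exists; confirm it serves reads of a single 4 KiB block
      for ((i = 1; i <= 20; i++)); do
          if dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct; then
              local size
              size=$(stat -c %s nbdtest)
              rm -f nbdtest
              [ "$size" != "0" ] && return 0
          fi
          sleep 0.1    # assumed back-off, not visible in the trace
      done
      return 1         # assumed failure path
  }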
00:05:45.539 08:51:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.539 08:51:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:45.539 spdk_app_start Round 2 00:05:45.539 08:51:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3640132 /var/tmp/spdk-nbd.sock 00:05:45.539 08:51:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3640132 ']' 00:05:45.539 08:51:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.540 08:51:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.540 08:51:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.540 08:51:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.540 08:51:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.797 08:51:23 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.797 08:51:23 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:45.797 08:51:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.054 Malloc0 00:05:46.054 08:51:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:46.312 Malloc1 00:05:46.312 08:51:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.312 08:51:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.569 /dev/nbd0 00:05:46.569 08:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.569 08:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.569 1+0 records in 00:05:46.569 1+0 records out 00:05:46.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200195 s, 20.5 MB/s 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.569 08:51:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.569 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.569 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.569 08:51:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.828 /dev/nbd1 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.828 1+0 records in 00:05:46.828 1+0 records out 00:05:46.828 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210333 s, 19.5 MB/s 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:46.828 08:51:24 event.app_repeat -- 
common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:46.828 08:51:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.828 08:51:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.085 08:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:47.085 { 00:05:47.085 "nbd_device": "/dev/nbd0", 00:05:47.085 "bdev_name": "Malloc0" 00:05:47.085 }, 00:05:47.085 { 00:05:47.085 "nbd_device": "/dev/nbd1", 00:05:47.085 "bdev_name": "Malloc1" 00:05:47.085 } 00:05:47.085 ]' 00:05:47.085 08:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:47.085 { 00:05:47.085 "nbd_device": "/dev/nbd0", 00:05:47.085 "bdev_name": "Malloc0" 00:05:47.085 }, 00:05:47.085 { 00:05:47.085 "nbd_device": "/dev/nbd1", 00:05:47.085 "bdev_name": "Malloc1" 00:05:47.085 } 00:05:47.085 ]' 00:05:47.085 08:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.342 08:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:47.342 /dev/nbd1' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:47.343 /dev/nbd1' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:47.343 256+0 records in 00:05:47.343 256+0 records out 00:05:47.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00509063 s, 206 MB/s 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:47.343 256+0 records in 00:05:47.343 256+0 records out 00:05:47.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0246161 s, 42.6 MB/s 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:47.343 256+0 records in 00:05:47.343 256+0 records out 00:05:47.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0231795 s, 45.2 MB/s 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.343 08:51:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.600 08:51:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.601 08:51:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:47.601 08:51:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:47.858 08:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:48.116 08:51:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:48.116 08:51:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:48.375 08:51:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.636 [2024-07-24 08:51:26.652602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.636 [2024-07-24 08:51:26.743357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.636 [2024-07-24 08:51:26.743362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.895 [2024-07-24 08:51:26.806808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.895 [2024-07-24 08:51:26.806876] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
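Round 2 is torn down above and spdk_app_start has been re-entered for the final round. The driving loop in event.sh, sketched in outline from the trace rather than quoted (app_pid names the app_repeat process, pid 3640132 here):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten $app_pid /var/tmp/spdk-nbd.sock      # wait for the app's RPC socket
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
      rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      # SIGTERM makes the current spdk_app_start call return; app_repeat re-enters it
      rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3
  done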
00:05:51.429 08:51:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3640132 /var/tmp/spdk-nbd.sock 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3640132 ']' 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.429 08:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:51.688 08:51:29 event.app_repeat -- event/event.sh@39 -- # killprocess 3640132 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3640132 ']' 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3640132 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3640132 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3640132' 00:05:51.688 killing process with pid 3640132 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3640132 00:05:51.688 08:51:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3640132 00:05:51.947 spdk_app_start is called in Round 0. 00:05:51.947 Shutdown signal received, stop current app iteration 00:05:51.947 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 reinitialization... 00:05:51.947 spdk_app_start is called in Round 1. 00:05:51.947 Shutdown signal received, stop current app iteration 00:05:51.947 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 reinitialization... 00:05:51.947 spdk_app_start is called in Round 2. 00:05:51.947 Shutdown signal received, stop current app iteration 00:05:51.947 Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 reinitialization... 00:05:51.947 spdk_app_start is called in Round 3. 
00:05:51.947 Shutdown signal received, stop current app iteration 00:05:51.947 08:51:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:51.947 08:51:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:51.947 00:05:51.947 real 0m17.923s 00:05:51.947 user 0m39.023s 00:05:51.947 sys 0m3.178s 00:05:51.947 08:51:29 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.947 08:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.947 ************************************ 00:05:51.947 END TEST app_repeat 00:05:51.947 ************************************ 00:05:51.947 08:51:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.947 08:51:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.947 08:51:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.947 08:51:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.947 08:51:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.947 ************************************ 00:05:51.947 START TEST cpu_locks 00:05:51.947 ************************************ 00:05:51.947 08:51:29 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:51.947 * Looking for test storage... 00:05:51.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.947 08:51:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.947 08:51:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.947 08:51:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.947 08:51:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.947 08:51:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.947 08:51:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.947 08:51:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.947 ************************************ 00:05:51.947 START TEST default_locks 00:05:51.947 ************************************ 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3642502 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3642502 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3642502 ']' 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
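The default_locks test now under way checks that an SPDK target pinned to core 0 (-m 0x1) holds its CPU-core lock file while it runs. In outline, per the cpu_locks.sh trace (the lock-file naming is inferred from the spdk_cpu_lock pattern grepped below):

  spdk_tgt -m 0x1 &
  spdk_tgt_pid=$!
  waitforlisten $spdk_tgt_pid            # default RPC socket /var/tmp/spdk.sock
  # locks_exist: a file lock matching spdk_cpu_lock must be held by the target
  lslocks -p $spdk_tgt_pid | grep -q spdk_cpu_lock
  killprocess $spdk_tgt_pid              # afterwards, waitforlisten on the dead pid must fail

The "lslocks: write error" printed below is benign: grep -q exits on its first match, so lslocks takes an EPIPE while writing the rest of its table.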
00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:51.947 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.206 [2024-07-24 08:51:30.075298] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:52.206 [2024-07-24 08:51:30.075373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642502 ] 00:05:52.207 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.207 [2024-07-24 08:51:30.106510] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:52.207 [2024-07-24 08:51:30.132480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.207 [2024-07-24 08:51:30.216979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.465 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.465 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:52.465 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3642502 00:05:52.465 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3642502 00:05:52.465 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.724 lslocks: write error 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3642502 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3642502 ']' 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3642502 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3642502 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3642502' 00:05:52.724 killing process with pid 3642502 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3642502 00:05:52.724 08:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3642502 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3642502 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3642502 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t 
waitforlisten 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 3642502 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3642502 ']' 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3642502) - No such process 00:05:53.294 ERROR: process (pid: 3642502) is no longer running 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.294 00:05:53.294 real 0m1.213s 00:05:53.294 user 0m1.150s 00:05:53.294 sys 0m0.521s 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.294 08:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.294 ************************************ 00:05:53.294 END TEST default_locks 00:05:53.294 ************************************ 00:05:53.294 08:51:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:53.294 08:51:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.294 08:51:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.294 08:51:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:53.294 ************************************ 00:05:53.294 START TEST default_locks_via_rpc 00:05:53.294 ************************************ 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3642664 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # 
waitforlisten 3642664 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3642664 ']' 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.294 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.294 [2024-07-24 08:51:31.337831] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:53.294 [2024-07-24 08:51:31.337914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642664 ] 00:05:53.294 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.294 [2024-07-24 08:51:31.370092] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:53.294 [2024-07-24 08:51:31.396277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.552 [2024-07-24 08:51:31.483962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.811 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.811 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:53.811 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc 
-- event/cpu_locks.sh@71 -- # locks_exist 3642664 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3642664 00:05:53.812 08:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3642664 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3642664 ']' 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3642664 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3642664 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3642664' 00:05:54.072 killing process with pid 3642664 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3642664 00:05:54.072 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3642664 00:05:54.641 00:05:54.641 real 0m1.233s 00:05:54.641 user 0m1.186s 00:05:54.641 sys 0m0.507s 00:05:54.641 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.641 08:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 ************************************ 00:05:54.641 END TEST default_locks_via_rpc 00:05:54.641 ************************************ 00:05:54.641 08:51:32 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:54.641 08:51:32 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.641 08:51:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.641 08:51:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 ************************************ 00:05:54.641 START TEST non_locking_app_on_locked_coremask 00:05:54.641 ************************************ 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3642826 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3642826 /var/tmp/spdk.sock 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3642826 ']' 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.641 08:51:32 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.641 08:51:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.641 [2024-07-24 08:51:32.615595] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:54.641 [2024-07-24 08:51:32.615671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642826 ] 00:05:54.641 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.641 [2024-07-24 08:51:32.647558] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:54.641 [2024-07-24 08:51:32.673965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.899 [2024-07-24 08:51:32.762277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3642845 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3642845 /var/tmp/spdk2.sock 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3642845 ']' 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:54.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.899 08:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.159 [2024-07-24 08:51:33.062052] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
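The launch pattern traced above pairs a target that holds the per-core lock with a second one that opts out of it. A minimal sketch of the same pattern, assuming an SPDK build at build/bin/spdk_tgt and run from the repository root:

    # First target claims core 0; the second can share core 0 only because it
    # skips the cpumask lock and talks on its own RPC socket.
    ./build/bin/spdk_tgt -m 0x1 &
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

Both command lines mirror the ones in the trace; only the relative binary path is an assumption.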
00:05:55.159 [2024-07-24 08:51:33.062145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3642845 ] 00:05:55.159 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.159 [2024-07-24 08:51:33.099803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:55.160 [2024-07-24 08:51:33.158326] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:55.160 [2024-07-24 08:51:33.158358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.427 [2024-07-24 08:51:33.347283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.025 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.025 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:56.025 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3642826 00:05:56.025 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3642826 00:05:56.025 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.592 lslocks: write error 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3642826 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3642826 ']' 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3642826 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3642826 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3642826' 00:05:56.592 killing process with pid 3642826 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3642826 00:05:56.592 08:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3642826 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3642845 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3642845 ']' 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3642845 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:57.530 08:51:35 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3642845 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3642845' 00:05:57.530 killing process with pid 3642845 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3642845 00:05:57.530 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3642845 00:05:58.099 00:05:58.099 real 0m3.358s 00:05:58.099 user 0m3.518s 00:05:58.099 sys 0m1.031s 00:05:58.099 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.099 08:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.099 ************************************ 00:05:58.099 END TEST non_locking_app_on_locked_coremask 00:05:58.099 ************************************ 00:05:58.099 08:51:35 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:58.099 08:51:35 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.099 08:51:35 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.099 08:51:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.099 ************************************ 00:05:58.099 START TEST locking_app_on_unlocked_coremask 00:05:58.099 ************************************ 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3643266 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3643266 /var/tmp/spdk.sock 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3643266 ']' 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
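The killprocess helper traced repeatedly in this suite follows a fixed shape: confirm the pid is alive, confirm it is an SPDK reactor rather than a sudo wrapper, then kill it and reap it. A simplified sketch of that shape (the real helper in test/common/autotest_common.sh also handles the sudo case):

    killprocess_sketch() {
        local pid=$1 process_name
        kill -0 "$pid" || return 1                           # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        fi
        [ "$process_name" = sudo ] && return 1               # simplified: never kill sudo directly
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                          # reaps it when the target is our child
    }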
00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.099 08:51:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.099 [2024-07-24 08:51:36.025684] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:58.099 [2024-07-24 08:51:36.025787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643266 ] 00:05:58.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.099 [2024-07-24 08:51:36.057878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:05:58.099 [2024-07-24 08:51:36.089161] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:58.099 [2024-07-24 08:51:36.089189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.099 [2024-07-24 08:51:36.180956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3643281 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3643281 /var/tmp/spdk2.sock 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3643281 ']' 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.357 08:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.615 [2024-07-24 08:51:36.496201] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:05:58.615 [2024-07-24 08:51:36.496312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643281 ] 00:05:58.615 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.615 [2024-07-24 08:51:36.531969] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:05:58.615 [2024-07-24 08:51:36.596238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.875 [2024-07-24 08:51:36.781292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.441 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.441 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:59.441 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3643281 00:05:59.441 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3643281 00:05:59.441 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.009 lslocks: write error 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3643266 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3643266 ']' 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3643266 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3643266 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3643266' 00:06:00.009 killing process with pid 3643266 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3643266 00:06:00.009 08:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3643266 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3643281 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3643281 ']' 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3643281 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:00.576 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3643281 00:06:00.835 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:00.835 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:00.835 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3643281' 00:06:00.835 killing process 
with pid 3643281 00:06:00.835 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3643281 00:06:00.835 08:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3643281 00:06:01.094 00:06:01.094 real 0m3.146s 00:06:01.094 user 0m3.277s 00:06:01.094 sys 0m1.053s 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.094 ************************************ 00:06:01.094 END TEST locking_app_on_unlocked_coremask 00:06:01.094 ************************************ 00:06:01.094 08:51:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:01.094 08:51:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:01.094 08:51:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.094 08:51:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.094 ************************************ 00:06:01.094 START TEST locking_app_on_locked_coremask 00:06:01.094 ************************************ 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3643702 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3643702 /var/tmp/spdk.sock 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3643702 ']' 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.094 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.352 [2024-07-24 08:51:39.225129] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:01.352 [2024-07-24 08:51:39.225234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643702 ] 00:06:01.352 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.352 [2024-07-24 08:51:39.256179] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:01.352 [2024-07-24 08:51:39.287439] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.352 [2024-07-24 08:51:39.376347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.610 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.610 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3643715 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3643715 /var/tmp/spdk2.sock 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3643715 /var/tmp/spdk2.sock 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3643715 /var/tmp/spdk2.sock 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3643715 ']' 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.611 08:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.611 [2024-07-24 08:51:39.681376] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:01.611 [2024-07-24 08:51:39.681488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643715 ] 00:06:01.611 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.611 [2024-07-24 08:51:39.715873] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
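The second target (pid 3643715) was started without --disable-cpumask-locks on a core mask already claimed by pid 3643702, so the claim error that follows is the expected outcome. The claim itself is a per-core lock file under /var/tmp (the spdk_cpu_lock_* files checked throughout this suite); the effect can be sketched with flock(1), with the caveat that the exact lock type SPDK takes internally is an assumption here:

    # The holder takes an exclusive lock on core 0's lock file and keeps it
    # briefly; a second exclusive, non-blocking attempt on the same file fails.
    flock -xn /var/tmp/spdk_cpu_lock_000 -c 'echo claimed; sleep 3' &
    sleep 1
    flock -xn /var/tmp/spdk_cpu_lock_000 -c true || echo 'core 0 already claimed'
    wait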
00:06:01.871 [2024-07-24 08:51:39.779969] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3643702 has claimed it. 00:06:01.871 [2024-07-24 08:51:39.780023] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:02.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3643715) - No such process 00:06:02.440 ERROR: process (pid: 3643715) is no longer running 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3643702 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3643702 00:06:02.440 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.700 lslocks: write error 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3643702 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3643702 ']' 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3643702 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3643702 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3643702' 00:06:02.700 killing process with pid 3643702 00:06:02.700 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3643702 00:06:02.701 08:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3643702 00:06:03.269 00:06:03.269 real 0m1.970s 00:06:03.269 user 0m2.137s 00:06:03.269 sys 0m0.620s 00:06:03.269 08:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.269 08:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.269 ************************************ 00:06:03.269 END TEST locking_app_on_locked_coremask 00:06:03.269 ************************************ 00:06:03.269 08:51:41 
event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.270 08:51:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:03.270 08:51:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.270 08:51:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.270 ************************************ 00:06:03.270 START TEST locking_overlapped_coremask 00:06:03.270 ************************************ 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3643998 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3643998 /var/tmp/spdk.sock 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3643998 ']' 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.270 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.270 [2024-07-24 08:51:41.247971] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:03.270 [2024-07-24 08:51:41.248078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3643998 ] 00:06:03.270 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.270 [2024-07-24 08:51:41.278991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
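Every launch in this suite is gated by the waitforlisten helper traced above, which polls (max_retries=100) until the target answers on its UNIX domain socket or the process dies. A simplified sketch of that loop (the real helper in test/common/autotest_common.sh also probes the RPC layer before returning):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died while starting
            [ -S "$rpc_addr" ] && return 0           # socket is up, target is listening
            sleep 0.1
        done
        return 1                                     # retries exhausted
    }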
00:06:03.270 [2024-07-24 08:51:41.310570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.529 [2024-07-24 08:51:41.400285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.529 [2024-07-24 08:51:41.400352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:03.529 [2024-07-24 08:51:41.400355] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3644003 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3644003 /var/tmp/spdk2.sock 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3644003 /var/tmp/spdk2.sock 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3644003 /var/tmp/spdk2.sock 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3644003 ']' 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:03.788 08:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.788 [2024-07-24 08:51:41.703669] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
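The two core masks in this test are chosen to collide on exactly one core: 0x7 is binary 111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so their intersection is bit 2. That is why the claim error below names core 2.

    printf 'overlap mask: 0x%x\n' $((0x07 & 0x1c))   # -> 0x4, i.e. bit 2 / core 2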
00:06:03.788 [2024-07-24 08:51:41.703766] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644003 ] 00:06:03.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.788 [2024-07-24 08:51:41.737188] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:03.788 [2024-07-24 08:51:41.792738] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3643998 has claimed it. 00:06:03.788 [2024-07-24 08:51:41.792787] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:04.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3644003) - No such process 00:06:04.355 ERROR: process (pid: 3644003) is no longer running 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3643998 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3643998 ']' 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3643998 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3643998 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3643998' 00:06:04.355 killing process with pid 3643998 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 -- # kill 3643998 00:06:04.355 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3643998 00:06:04.923 00:06:04.923 real 0m1.661s 00:06:04.923 user 0m4.481s 00:06:04.923 sys 0m0.468s 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 ************************************ 00:06:04.923 END TEST locking_overlapped_coremask 00:06:04.923 ************************************ 00:06:04.923 08:51:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:04.923 08:51:42 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.923 08:51:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.923 08:51:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 ************************************ 00:06:04.923 START TEST locking_overlapped_coremask_via_rpc 00:06:04.923 ************************************ 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3644179 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3644179 /var/tmp/spdk.sock 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3644179 ']' 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.923 08:51:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:04.923 [2024-07-24 08:51:42.954498] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
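The check_remaining_locks helper traced at the end of the previous test reduces to a glob-and-compare over the per-core lock files, reconstructed here directly from the xtrace above:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 for mask 0x7
        [[ ${locks[*]} == "${locks_expected[*]}" ]]         # joined-list comparison
    }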
00:06:04.923 [2024-07-24 08:51:42.954561] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644179 ] 00:06:04.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.923 [2024-07-24 08:51:42.987164] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:04.923 [2024-07-24 08:51:43.012870] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:04.923 [2024-07-24 08:51:43.012895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.181 [2024-07-24 08:51:43.105439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.181 [2024-07-24 08:51:43.105505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.181 [2024-07-24 08:51:43.105508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3644303 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3644303 /var/tmp/spdk2.sock 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3644303 ']' 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:05.439 08:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.439 [2024-07-24 08:51:43.399968] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:05.439 [2024-07-24 08:51:43.400067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644303 ] 00:06:05.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.439 [2024-07-24 08:51:43.434161] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
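In the via_rpc variant both targets boot with --disable-cpumask-locks and the locks are switched on afterwards over JSON-RPC. A sketch of the same toggle with the stock scripts/rpc.py client (the client choice is an assumption; the trace drives it through the harness's rpc_cmd wrapper):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # expected to fail: core 2 is taken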
00:06:05.439 [2024-07-24 08:51:43.488127] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:05.439 [2024-07-24 08:51:43.488154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:05.697 [2024-07-24 08:51:43.664563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.697 [2024-07-24 08:51:43.664629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.697 [2024-07-24 08:51:43.664632] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.262 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.263 [2024-07-24 08:51:44.330206] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3644179 has claimed it. 
00:06:06.263 request: 00:06:06.263 { 00:06:06.263 "method": "framework_enable_cpumask_locks", 00:06:06.263 "req_id": 1 00:06:06.263 } 00:06:06.263 Got JSON-RPC error response 00:06:06.263 response: 00:06:06.263 { 00:06:06.263 "code": -32603, 00:06:06.263 "message": "Failed to claim CPU core: 2" 00:06:06.263 } 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3644179 /var/tmp/spdk.sock 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3644179 ']' 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.263 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3644303 /var/tmp/spdk2.sock 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3644303 ']' 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
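The error code -32603 in the response above is the JSON-RPC 2.0 "Internal error" code; the request dump is SPDK's internal form (req_id), whereas on the wire the call carries the standard jsonrpc/id fields. A sketch of issuing it raw, assuming a netcat build with UNIX-socket support (nc -U):

    echo '{"jsonrpc": "2.0", "method": "framework_enable_cpumask_locks", "id": 1}' \
        | nc -U /var/tmp/spdk2.sock
    # on a contested core the target answers with the error traced above:
    # "code": -32603, "message": "Failed to claim CPU core: 2"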
00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:06.521 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:06.779 00:06:06.779 real 0m1.942s 00:06:06.779 user 0m1.012s 00:06:06.779 sys 0m0.180s 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:06.779 08:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:06.779 ************************************ 00:06:06.779 END TEST locking_overlapped_coremask_via_rpc 00:06:06.779 ************************************ 00:06:06.779 08:51:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:06.779 08:51:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3644179 ]] 00:06:06.779 08:51:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3644179 00:06:06.779 08:51:44 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3644179 ']' 00:06:06.779 08:51:44 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3644179 00:06:06.779 08:51:44 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:06.779 08:51:44 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:06.779 08:51:44 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3644179 00:06:07.038 08:51:44 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:07.038 08:51:44 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:07.038 08:51:44 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3644179' 00:06:07.038 killing process with pid 3644179 00:06:07.038 08:51:44 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3644179 00:06:07.038 08:51:44 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3644179 00:06:07.298 08:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3644303 ]] 00:06:07.298 08:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3644303 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3644303 ']' 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3644303 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3644303 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3644303' 00:06:07.298 killing process with pid 3644303 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3644303 00:06:07.298 08:51:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3644303 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3644179 ]] 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3644179 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3644179 ']' 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3644179 00:06:07.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3644179) - No such process 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3644179 is not found' 00:06:07.867 Process with pid 3644179 is not found 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3644303 ]] 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3644303 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3644303 ']' 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3644303 00:06:07.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3644303) - No such process 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3644303 is not found' 00:06:07.867 Process with pid 3644303 is not found 00:06:07.867 08:51:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:07.867 00:06:07.867 real 0m15.804s 00:06:07.867 user 0m27.453s 00:06:07.867 sys 0m5.308s 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.867 08:51:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.867 ************************************ 00:06:07.867 END TEST cpu_locks 00:06:07.867 ************************************ 00:06:07.867 00:06:07.867 real 0m39.559s 00:06:07.867 user 1m15.361s 00:06:07.867 sys 0m9.308s 00:06:07.867 08:51:45 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.867 08:51:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.867 ************************************ 00:06:07.867 END TEST event 00:06:07.867 ************************************ 00:06:07.867 08:51:45 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:07.867 08:51:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.867 08:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.867 08:51:45 -- common/autotest_common.sh@10 -- # set +x 00:06:07.867 ************************************ 00:06:07.867 START TEST thread 00:06:07.867 ************************************ 00:06:07.867 08:51:45 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:07.867 * Looking for test storage... 00:06:07.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:07.867 08:51:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.867 08:51:45 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:07.867 08:51:45 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.867 08:51:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:07.867 ************************************ 00:06:07.867 START TEST thread_poller_perf 00:06:07.867 ************************************ 00:06:07.867 08:51:45 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:07.867 [2024-07-24 08:51:45.909207] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:07.867 [2024-07-24 08:51:45.909273] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644672 ] 00:06:07.867 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.867 [2024-07-24 08:51:45.941237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:07.867 [2024-07-24 08:51:45.970820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.127 [2024-07-24 08:51:46.062015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.127 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:09.067 ====================================== 00:06:09.067 busy:2714176432 (cyc) 00:06:09.067 total_run_count: 292000 00:06:09.067 tsc_hz: 2700000000 (cyc) 00:06:09.067 ====================================== 00:06:09.067 poller_cost: 9295 (cyc), 3442 (nsec) 00:06:09.067 00:06:09.067 real 0m1.256s 00:06:09.067 user 0m1.171s 00:06:09.067 sys 0m0.080s 00:06:09.067 08:51:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.067 08:51:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:09.067 ************************************ 00:06:09.067 END TEST thread_poller_perf 00:06:09.067 ************************************ 00:06:09.067 08:51:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.067 08:51:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:09.067 08:51:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.067 08:51:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:09.326 ************************************ 00:06:09.326 START TEST thread_poller_perf 00:06:09.326 ************************************ 00:06:09.326 08:51:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:09.326 [2024-07-24 08:51:47.212432] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
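Before the second run's output continues below, the first run's poller_cost line is worth a sanity check: it follows directly from the busy-cycle count, the run count, and the TSC rate printed with it. The integer arithmetic below reproduces both reported values (the exact rounding is inferred from the output, not read from the tool's source):

    # first run: 2714176432 busy cycles over 292000 polls at 2.7 GHz
    busy=2714176432 total_run_count=292000 tsc_hz=2700000000
    echo $(( busy / total_run_count ))                        # 9295 -> poller_cost (cyc)
    echo $(( busy / total_run_count * 1000000000 / tsc_hz ))  # 3442 -> poller_cost (nsec)

The same division gives 696 cyc / 257 nsec for the 0-microsecond-period run that follows: with no sleep between iterations, the per-poll overhead drops by more than an order of magnitude.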
00:06:09.326 [2024-07-24 08:51:47.212501] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3644825 ] 00:06:09.326 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.326 [2024-07-24 08:51:47.244983] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:09.326 [2024-07-24 08:51:47.274575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.326 [2024-07-24 08:51:47.368937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.326 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:10.708 ====================================== 00:06:10.708 busy:2703027790 (cyc) 00:06:10.708 total_run_count: 3879000 00:06:10.708 tsc_hz: 2700000000 (cyc) 00:06:10.708 ====================================== 00:06:10.708 poller_cost: 696 (cyc), 257 (nsec) 00:06:10.708 00:06:10.708 real 0m1.253s 00:06:10.708 user 0m1.160s 00:06:10.708 sys 0m0.088s 00:06:10.708 08:51:48 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.708 08:51:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:10.708 ************************************ 00:06:10.708 END TEST thread_poller_perf 00:06:10.708 ************************************ 00:06:10.708 08:51:48 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:10.708 00:06:10.708 real 0m2.658s 00:06:10.708 user 0m2.387s 00:06:10.708 sys 0m0.270s 00:06:10.708 08:51:48 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.708 08:51:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.708 ************************************ 00:06:10.709 END TEST thread 00:06:10.709 ************************************ 00:06:10.709 08:51:48 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:10.709 08:51:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.709 08:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.709 08:51:48 -- common/autotest_common.sh@10 -- # set +x 00:06:10.709 ************************************ 00:06:10.709 START TEST accel 00:06:10.709 ************************************ 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:10.709 * Looking for test storage... 
00:06:10.709 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:10.709 08:51:48 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:10.709 08:51:48 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:10.709 08:51:48 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.709 08:51:48 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3645020 00:06:10.709 08:51:48 accel -- accel/accel.sh@63 -- # waitforlisten 3645020 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@829 -- # '[' -z 3645020 ']' 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.709 08:51:48 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.709 08:51:48 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.709 08:51:48 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.709 08:51:48 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.709 08:51:48 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.709 08:51:48 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.709 08:51:48 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.709 08:51:48 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.709 08:51:48 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:10.709 08:51:48 accel -- accel/accel.sh@41 -- # jq -r . 00:06:10.709 [2024-07-24 08:51:48.620643] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:10.709 [2024-07-24 08:51:48.620723] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645020 ] 00:06:10.709 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.709 [2024-07-24 08:51:48.654850] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:10.709 [2024-07-24 08:51:48.680457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.709 [2024-07-24 08:51:48.766291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.968 08:51:49 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.968 08:51:49 accel -- common/autotest_common.sh@862 -- # return 0 00:06:10.968 08:51:49 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:10.968 08:51:49 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:10.968 08:51:49 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:10.968 08:51:49 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:10.968 08:51:49 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". 
| to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:10.968 08:51:49 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:10.968 08:51:49 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.968 08:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.968 08:51:49 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:10.968 08:51:49 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:10.968 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:10.968 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:10.968 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # IFS== 00:06:11.228 08:51:49 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:11.228 08:51:49 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:11.228 08:51:49 accel -- accel/accel.sh@75 -- # killprocess 3645020 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@948 -- # '[' -z 3645020 ']' 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@952 -- # kill -0 3645020 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@953 -- # uname 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3645020 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3645020' 00:06:11.228 killing process with pid 3645020 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@967 -- # kill 3645020 00:06:11.228 08:51:49 accel -- common/autotest_common.sh@972 -- # wait 3645020 00:06:11.487 08:51:49 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:11.487 08:51:49 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:11.487 08:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:11.487 08:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.487 08:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.487 08:51:49 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.487 08:51:49 
accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:11.487 08:51:49 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:06:11.487 08:51:49 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.487 08:51:49 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:11.746 08:51:49 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:11.746 08:51:49 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:11.746 08:51:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.746 08:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.746 ************************************ 00:06:11.746 START TEST accel_missing_filename 00:06:11.746 ************************************ 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:11.747 08:51:49 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:11.747 08:51:49 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:11.747 [2024-07-24 08:51:49.650811] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:11.747 [2024-07-24 08:51:49.650876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645186 ] 00:06:11.747 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.747 [2024-07-24 08:51:49.683973] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:11.747 [2024-07-24 08:51:49.713420] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.747 [2024-07-24 08:51:49.805482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.006 [2024-07-24 08:51:49.865459] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.006 [2024-07-24 08:51:49.951974] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:12.006 A filename is required. 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.006 00:06:12.006 real 0m0.402s 00:06:12.006 user 0m0.295s 00:06:12.006 sys 0m0.140s 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.006 08:51:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:12.006 ************************************ 00:06:12.006 END TEST accel_missing_filename 00:06:12.006 ************************************ 00:06:12.006 08:51:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.006 08:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:12.006 08:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.006 08:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.006 ************************************ 00:06:12.006 START TEST accel_compress_verify 00:06:12.006 ************************************ 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.006 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:12.006 
08:51:50 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:12.006 08:51:50 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:12.006 [2024-07-24 08:51:50.103588] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:12.006 [2024-07-24 08:51:50.103655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645288 ] 00:06:12.265 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.265 [2024-07-24 08:51:50.136494] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:12.265 [2024-07-24 08:51:50.166854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.266 [2024-07-24 08:51:50.262203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.266 [2024-07-24 08:51:50.326549] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:12.524 [2024-07-24 08:51:50.412523] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:12.524 00:06:12.524 Compression does not support the verify option, aborting. 
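"Compression does not support the verify option, aborting." is the expected outcome here: accel_compress_verify, like accel_missing_filename before it, wraps accel_perf in the harness's NOT helper, so the non-zero exit becomes a test pass. The exit-status normalization is visible in the xtrace (es=234 -> 106 -> 1 earlier, es=161 -> 33 -> 1 just below); a rough reconstruction of the pattern, not the verbatim autotest_common.sh code:

    NOT() {                                  # sketch of the pattern; details assumed
        local es=0
        "$@" || es=$?                        # run the wrapped command, capture its status
        (( es > 128 )) && es=$((es - 128))   # fold signal-style exits: 234->106, 161->33
        case "$es" in 0) ;; *) es=1 ;; esac  # collapse any remaining failure to 1
        (( !es == 0 ))                       # invert: NOT passes only if the command failed
    }

So `NOT accel_perf -t 1 -w compress` returns 0 in this test precisely because accel_perf aborts.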
00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.524 00:06:12.524 real 0m0.413s 00:06:12.524 user 0m0.291s 00:06:12.524 sys 0m0.156s 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.524 08:51:50 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.524 ************************************ 00:06:12.524 END TEST accel_compress_verify 00:06:12.524 ************************************ 00:06:12.524 08:51:50 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:12.524 08:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.524 08:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.525 08:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 ************************************ 00:06:12.525 START TEST accel_wrong_workload 00:06:12.525 ************************************ 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:12.525 08:51:50 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
00:06:12.525 Unsupported workload type: foobar 00:06:12.525 [2024-07-24 08:51:50.563546] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:12.525 accel_perf options: 00:06:12.525 [-h help message] 00:06:12.525 [-q queue depth per core] 00:06:12.525 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.525 [-T number of threads per core 00:06:12.525 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:12.525 [-t time in seconds] 00:06:12.525 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.525 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:12.525 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.525 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.525 [-S for crc32c workload, use this seed value (default 0) 00:06:12.525 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.525 [-f for fill workload, use this BYTE value (default 255) 00:06:12.525 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.525 [-y verify result if this switch is on] 00:06:12.525 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.525 Can be used to spread operations across a wider range of memory. 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.525 00:06:12.525 real 0m0.023s 00:06:12.525 user 0m0.014s 00:06:12.525 sys 0m0.009s 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.525 08:51:50 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 ************************************ 00:06:12.525 END TEST accel_wrong_workload 00:06:12.525 ************************************ 00:06:12.525 Error: writing output failed: Broken pipe 00:06:12.525 08:51:50 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.525 08:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:12.525 08:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.525 08:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 ************************************ 00:06:12.525 START TEST accel_negative_buffers 00:06:12.525 ************************************ 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.525 08:51:50 accel.accel_negative_buffers -- 
common/autotest_common.sh@640 -- # type -t accel_perf 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:12.525 08:51:50 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:12.525 -x option must be non-negative. 00:06:12.525 [2024-07-24 08:51:50.634072] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:12.525 accel_perf options: 00:06:12.525 [-h help message] 00:06:12.525 [-q queue depth per core] 00:06:12.525 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:12.525 [-T number of threads per core 00:06:12.525 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:12.525 [-t time in seconds] 00:06:12.525 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:12.525 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:12.525 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:12.525 [-l for compress/decompress workloads, name of uncompressed input file 00:06:12.525 [-S for crc32c workload, use this seed value (default 0) 00:06:12.525 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:12.525 [-f for fill workload, use this BYTE value (default 255) 00:06:12.525 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:12.525 [-y verify result if this switch is on] 00:06:12.525 [-a tasks to allocate per core (default: same value as -q)] 00:06:12.525 Can be used to spread operations across a wider range of memory. 
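The option summary printed above (once per negative test) is the whole CLI surface these accel tests drive; every invocation in this stretch of the log reduces to the example binary plus a handful of those flags. For reference, using only options listed in that help text (relative paths assumed; the log runs the same binary from /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/):

    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # the passing crc32c test that follows
    ./build/examples/accel_perf -t 1 -w foobar            # "Unsupported workload type: foobar" above
    ./build/examples/accel_perf -t 1 -w xor -y -x -1      # "-x option must be non-negative." above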
00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:12.525 00:06:12.525 real 0m0.024s 00:06:12.525 user 0m0.012s 00:06:12.525 sys 0m0.012s 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.525 08:51:50 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:12.525 ************************************ 00:06:12.525 END TEST accel_negative_buffers 00:06:12.525 ************************************ 00:06:12.801 Error: writing output failed: Broken pipe 00:06:12.801 08:51:50 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:12.801 08:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:12.801 08:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.801 08:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.801 ************************************ 00:06:12.801 START TEST accel_crc32c 00:06:12.801 ************************************ 00:06:12.801 08:51:50 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:12.801 08:51:50 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:12.801 [2024-07-24 08:51:50.699011] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:12.801 [2024-07-24 08:51:50.699081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645397 ] 00:06:12.801 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.802 [2024-07-24 08:51:50.731539] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:12.802 [2024-07-24 08:51:50.761249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.802 [2024-07-24 08:51:50.856788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c 
-- accel/accel.sh@20 -- # val=32 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.088 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:13.089 08:51:50 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 
08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:14.026 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:14.027 08:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:14.027 08:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.027 08:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:14.027 08:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.027 00:06:14.027 real 0m1.408s 00:06:14.027 user 0m1.269s 00:06:14.027 sys 0m0.142s 00:06:14.027 08:51:52 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.027 08:51:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:14.027 ************************************ 00:06:14.027 END TEST accel_crc32c 00:06:14.027 ************************************ 00:06:14.027 08:51:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:14.027 08:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:14.027 08:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.027 08:51:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.027 ************************************ 00:06:14.027 START TEST accel_crc32c_C2 00:06:14.027 ************************************ 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:14.027 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:14.286 [2024-07-24 08:51:52.146730] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
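The long runs of "# val=..." records above, repeated below for crc32c_C2, are accel.sh stepping through accel_perf's output one field at a time: the trace's `IFS=:`, `read -r var val`, and `case "$var"` steps imply a parsing loop along these lines (a sketch inferred from the xtrace; the key patterns and everything beyond accel_opc/accel_module are assumptions, not the verbatim script):

    # split each "key: value" line of accel_perf output and remember
    # which opcode ran (accel_opc) and which module executed it (accel_module)
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val# } ;;    # trace shows accel_opc=crc32c
            *module*) accel_module=${val# } ;; # trace shows accel_module=software
        esac
    done < <(./build/examples/accel_perf -t 1 -w crc32c -S 32 -y)

The `[[ -n software ]]` / `[[ -n crc32c ]]` records at the end of each test then confirm that both fields were actually seen.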
00:06:14.286 [2024-07-24 08:51:52.146794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645566 ]
00:06:14.286 EAL: No free 2048 kB hugepages reported on node 1
00:06:14.286 [2024-07-24 08:51:52.179232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:14.286 [2024-07-24 08:51:52.208978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:14.286 [2024-07-24 08:51:52.306482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:14.286 08:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:15.667 08:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:15.667 08:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:15.667 08:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:15.667 
00:06:15.667 real 0m1.409s
00:06:15.667 user 0m1.258s
00:06:15.667 sys 0m0.154s
00:06:15.667 08:51:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:15.667 08:51:53 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:15.667 ************************************
00:06:15.667 END TEST accel_crc32c_C2
00:06:15.667 ************************************
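The val=/accel_opc=/accel_module= lines above are bash xtrace from accel.sh's config handshake: the script appears to read accel_perf's output line by line, splitting on ':' and switching on the key, which is where the repeated IFS=:, read -r var val and case "$var" in entries come from. A minimal sketch of that pattern, for orientation only (the key names and the stand-in producer are assumptions, not accel.sh's actual source):

    # read var:val pairs and capture the opcode and engine module
    while IFS=: read -r var val; do
        case "$var" in
            *opc*) accel_opc=$val ;;         # e.g. crc32c
            *module*) accel_module=$val ;;   # e.g. software
        esac
    done < <(printf '%s\n' opc:crc32c module:software)   # stand-in producer, not accel_perf
    echo "$accel_opc on $accel_module"                   # -> crc32c on software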
00:06:15.667 08:51:53 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:15.667 08:51:53 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:15.667 08:51:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:15.667 08:51:53 accel -- common/autotest_common.sh@10 -- # set +x
00:06:15.667 ************************************
00:06:15.667 START TEST accel_copy
00:06:15.667 ************************************
00:06:15.667 08:51:53 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@19 -- # IFS=:
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:15.667 08:51:53 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:06:15.668 08:51:53 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:06:15.668 [2024-07-24 08:51:53.597866] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:06:15.668 [2024-07-24 08:51:53.597929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645834 ]
00:06:15.668 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.668 [2024-07-24 08:51:53.629248] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:15.668 [2024-07-24 08:51:53.659097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.668 [2024-07-24 08:51:53.754071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:06:15.927 08:51:53 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:17.306 08:51:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:17.306 08:51:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:17.307 08:51:54 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:17.307 
00:06:17.307 real 0m1.414s
00:06:17.307 user 0m1.263s
00:06:17.307 sys 0m0.153s
00:06:17.307 08:51:54 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:17.307 08:51:54 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:17.307 ************************************
00:06:17.307 END TEST accel_copy
00:06:17.307 ************************************
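Each of these cases reduces to a single accel_perf invocation; the exact command line is visible in the trace above. A sketch for re-running the copy workload by hand against an already-built tree (the -c /dev/fd/62 argument, which the harness uses to feed in the JSON config produced by build_accel_config and jq, is dropped for a standalone run):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy -y   # 1-second copy workload; -y shows up as val=Yes above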
00:06:17.307 08:51:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:17.307 08:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:17.307 08:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:17.307 08:51:55 accel -- common/autotest_common.sh@10 -- # set +x
00:06:17.307 ************************************
00:06:17.307 START TEST accel_fill
00:06:17.307 ************************************
00:06:17.307 08:51:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=:
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:06:17.307 [2024-07-24 08:51:55.065303] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:06:17.307 [2024-07-24 08:51:55.065367] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3645993 ]
00:06:17.307 EAL: No free 2048 kB hugepages reported on node 1
00:06:17.307 [2024-07-24 08:51:55.096500] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:17.307 [2024-07-24 08:51:55.128091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:17.307 [2024-07-24 08:51:55.220806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:06:17.307 08:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:18.687 08:51:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:18.687 08:51:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:18.687 08:51:56 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:18.687 
00:06:18.687 real 0m1.413s
00:06:18.687 user 0m1.262s
00:06:18.687 sys 0m0.153s
00:06:18.687 08:51:56 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:18.687 08:51:56 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:18.687 ************************************
00:06:18.687 END TEST accel_fill
00:06:18.687 ************************************
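The fill case adds -f 128 -q 64 -a 64 to the common -t 1 -w fill -y, and the parsed values above line up with that: the fill byte shows up as val=0x80 (128 decimal) and two val=64 entries replace the val=32 pairs seen in the other workloads. Reading the mapping off the trace (an inference from this log, not from accel_perf's help text):

    # -w fill -> accel_opc=fill
    # -f 128  -> val=0x80  (fill pattern byte; 128 == 0x80)
    # -q 64 and -a 64 -> the two val=64 entries (this log does not say
    #                    which flag maps to which)
    printf '0x%x\n' 128   # -> 0x80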
00:06:18.687 08:51:56 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:18.687 08:51:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:18.687 08:51:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:18.687 08:51:56 accel -- common/autotest_common.sh@10 -- # set +x
00:06:18.687 ************************************
00:06:18.687 START TEST accel_copy_crc32c
00:06:18.687 ************************************
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=:
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:06:18.687 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:06:18.687 [2024-07-24 08:51:56.524729] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:06:18.687 [2024-07-24 08:51:56.524789] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646144 ]
00:06:18.687 EAL: No free 2048 kB hugepages reported on node 1
00:06:18.687 [2024-07-24 08:51:56.556965] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:18.687 [2024-07-24 08:51:56.586590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.687 [2024-07-24 08:51:56.679796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:06:18.688 08:51:56 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:20.066 08:51:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:20.067 
00:06:20.067 real 0m1.396s
00:06:20.067 user 0m1.249s
00:06:20.067 sys 0m0.150s
00:06:20.067 08:51:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:20.067 08:51:57 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:20.067 ************************************
00:06:20.067 END TEST accel_copy_crc32c
00:06:20.067 ************************************
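Around every case the harness prints matching START/END banners and a timing summary (the real/user/sys triplet above). A rough sketch of what a wrapper like run_test does, inferred from the banners and timing lines in this log rather than taken from autotest_common.sh:

    run_test() {
        local name=$1; shift
        printf '%s\nSTART TEST %s\n%s\n' \
            '************************************' "$name" '************************************'
        time "$@"    # produces the real/user/sys lines seen after each case
        printf '%s\nEND TEST %s\n%s\n' \
            '************************************' "$name" '************************************'
    }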
00:06:20.067 08:51:57 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:20.067 08:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:06:20.067 08:51:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:20.067 08:51:57 accel -- common/autotest_common.sh@10 -- # set +x
00:06:20.067 ************************************
00:06:20.067 START TEST accel_copy_crc32c_C2
00:06:20.067 ************************************
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=:
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:06:20.067 08:51:57 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:06:20.067 [2024-07-24 08:51:57.960950] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:06:20.067 [2024-07-24 08:51:57.961002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646307 ]
00:06:20.067 EAL: No free 2048 kB hugepages reported on node 1
00:06:20.067 [2024-07-24 08:51:57.992531] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:20.067 [2024-07-24 08:51:58.022330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:20.067 [2024-07-24 08:51:58.121408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:06:20.327 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:06:20.328 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:20.328 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:06:20.328 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:06:20.328 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:06:20.328 08:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:21.264 08:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:21.264 08:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:21.264 08:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:21.264 
00:06:21.264 real 0m1.415s
00:06:21.264 user 0m1.268s
00:06:21.264 sys 0m0.150s
00:06:21.264 08:51:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:21.264 08:51:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:21.264 ************************************
00:06:21.264 END TEST accel_copy_crc32c_C2
00:06:21.264 ************************************
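The only difference from the plain copy_crc32c case is -C 2, and the config trace reflects it: alongside the usual '4096 bytes' buffer an '8192 bytes' value appears (8192 = 2 x 4096), consistent with the chained variant working over two 4 KiB source buffers — an inference from the trace, not from accel_perf's source. The invocation itself is verbatim from the trace (same standalone caveat as for the copy case):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w copy_crc32c -y -C 2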
00:06:21.524 08:51:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:21.524 08:51:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:21.524 08:51:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:21.524 08:51:59 accel -- common/autotest_common.sh@10 -- # set +x
00:06:21.524 ************************************
00:06:21.524 START TEST accel_dualcast
00:06:21.524 ************************************
00:06:21.524 08:51:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=:
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val
00:06:21.524 08:51:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=,
00:06:21.525 08:51:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r .
00:06:21.525 [2024-07-24 08:51:59.423858] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:06:21.525 [2024-07-24 08:51:59.423921] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646579 ]
00:06:21.525 EAL: No free 2048 kB hugepages reported on node 1
00:06:21.525 [2024-07-24 08:51:59.456606] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:06:21.525 [2024-07-24 08:51:59.486274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:21.525 [2024-07-24 08:51:59.580969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes'
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:21.785 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32
00:06:21.786 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1
00:06:21.786 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds'
00:06:21.786 08:51:59 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes
[... repeated xtrace plumbing lines (case "$var" in / IFS=: / read -r var val) and empty val= reads trimmed ...]
00:06:22.724 08:52:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:22.724 08:52:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:22.724 08:52:00 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:22.724 
00:06:22.724 real 0m1.416s
00:06:22.724 user 0m1.267s
00:06:22.724 sys 0m0.151s
00:06:22.724 08:52:00 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.724 08:52:00 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:22.724 ************************************
00:06:22.724 END TEST accel_dualcast
00:06:22.724 ************************************
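Six software-engine cases have completed at this point (crc32c_C2 1.409s, copy 1.414s, fill 1.413s, copy_crc32c 1.396s, copy_crc32c_C2 1.415s, dualcast 1.416s). The per-test wall-clock times can be pulled out of a saved copy of this log; "build.log" below is a placeholder file name:

    # pair each END TEST banner with the 'real' time printed just before it
    awk '$2 == "real" { t = $3 } /END TEST/ { print $4, t }' build.log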
00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:22.984 08:52:00 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:22.984 [2024-07-24 08:52:00.886510] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:22.984 [2024-07-24 08:52:00.886570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646732 ] 00:06:22.984 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.984 [2024-07-24 08:52:00.918691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.985 [2024-07-24 08:52:00.948920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.985 [2024-07-24 08:52:01.043828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 
accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:23.244 08:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r 
var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:24.183 08:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.183 00:06:24.183 real 0m1.414s 00:06:24.183 user 0m1.264s 00:06:24.183 sys 0m0.152s 00:06:24.183 08:52:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.183 08:52:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:24.183 ************************************ 00:06:24.183 END TEST accel_compare 00:06:24.183 ************************************ 00:06:24.442 08:52:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:24.442 08:52:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:24.442 08:52:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.442 08:52:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:24.442 ************************************ 00:06:24.442 START TEST accel_xor 00:06:24.442 ************************************ 00:06:24.442 08:52:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@31 -- 
# accel_json_cfg=() 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:24.442 08:52:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:24.442 [2024-07-24 08:52:02.346921] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:24.442 [2024-07-24 08:52:02.346995] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3646892 ] 00:06:24.442 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.442 [2024-07-24 08:52:02.379620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:24.442 [2024-07-24 08:52:02.408765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.442 [2024-07-24 08:52:02.499943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.702 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 
08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:24.703 08:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 
08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:25.641 08:52:03 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.641 00:06:25.641 real 0m1.393s 00:06:25.641 user 0m1.251s 00:06:25.641 sys 0m0.145s 00:06:25.641 08:52:03 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.641 08:52:03 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:25.641 ************************************ 00:06:25.641 END TEST accel_xor 00:06:25.641 ************************************ 00:06:25.641 08:52:03 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:25.641 08:52:03 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:25.641 08:52:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.641 08:52:03 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.901 ************************************ 00:06:25.901 START TEST accel_xor 00:06:25.901 ************************************ 00:06:25.901 08:52:03 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
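Annotation: two xor runs are traced back to back. The first parses val=2 (two source buffers, apparently the default), while the rerun adds -x 3 and parses val=3. In both cases the workload computes each destination byte as the XOR of the corresponding byte in every source, which -y then re-checks. A plain-arithmetic illustration of the three-source case (not SPDK code):

    # Byte-wise XOR across three sources, as exercised by -w xor -x 3:
    # each destination byte is src1 ^ src2 ^ src3.
    src1=(15 240 170) src2=(1 2 3) src3=(255 0 85)
    for i in "${!src1[@]}"; do
      printf 'dst[%d]=0x%02x\n' "$i" $(( src1[i] ^ src2[i] ^ src3[i] ))
    done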
00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:25.901 08:52:03 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:25.901 [2024-07-24 08:52:03.786814] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:25.901 [2024-07-24 08:52:03.786876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647167 ] 00:06:25.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.901 [2024-07-24 08:52:03.819279] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.901 [2024-07-24 08:52:03.849217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.901 [2024-07-24 08:52:03.944008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.901 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:25.902 08:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:27.284 08:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:27.284 00:06:27.284 real 0m1.412s 00:06:27.284 user 0m1.263s 00:06:27.284 sys 0m0.152s 00:06:27.284 08:52:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:27.284 08:52:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 ************************************ 00:06:27.284 END TEST accel_xor 00:06:27.284 ************************************ 00:06:27.284 08:52:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:27.284 08:52:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:27.284 08:52:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.284 08:52:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:27.284 ************************************ 00:06:27.284 START TEST accel_dif_verify 00:06:27.284 ************************************ 00:06:27.284 08:52:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # 
local IFS=, 00:06:27.284 08:52:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:27.284 [2024-07-24 08:52:05.240538] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:27.284 [2024-07-24 08:52:05.240600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647324 ] 00:06:27.284 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.284 [2024-07-24 08:52:05.273443] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:27.284 [2024-07-24 08:52:05.303958] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.284 [2024-07-24 08:52:05.399140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
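Annotation: the dif_verify run parses four sizes, '4096 bytes' twice, then '512 bytes' and '8 bytes'. The 8-byte value matches a T10 DIF tuple (2-byte guard CRC, 2-byte application tag, 4-byte reference tag); how the other values map to buffer, block, and chunk parameters is not spelled out in the trace, so the arithmetic below is one plausible reading rather than a statement of accel_perf's actual layout:

    # Assumed reading: a 4096-byte buffer of 512-byte protected blocks,
    # each block followed by an 8-byte DIF tuple.
    buf=4096 blk=512 dif=8
    blocks=$(( buf / blk ))
    echo "blocks=$blocks pi_bytes=$(( blocks * dif )) extended=$(( buf + blocks * dif ))"
    # prints: blocks=8 pi_bytes=64 extended=4160

dif_verify checks tuples that are already present; dif_generate (the next test) computes and inserts them, and dif_generate_copy does so while also copying the data. That is plausibly why the dif runs parse No where the compare/xor runs parsed Yes: their accel_perf invocations are launched without -y.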
00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r 
var val 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:27.545 08:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:28.924 08:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.924 00:06:28.924 real 0m1.417s 00:06:28.924 user 0m1.270s 00:06:28.924 sys 0m0.152s 00:06:28.924 08:52:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.924 08:52:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:28.924 ************************************ 00:06:28.924 END TEST accel_dif_verify 00:06:28.924 ************************************ 00:06:28.924 08:52:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:28.924 08:52:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:28.924 08:52:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.924 08:52:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.924 ************************************ 00:06:28.924 START TEST accel_dif_generate 00:06:28.924 ************************************ 00:06:28.924 08:52:06 accel.accel_dif_generate -- 
common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:28.924 [2024-07-24 08:52:06.703912] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:28.924 [2024-07-24 08:52:06.703977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647477 ] 00:06:28.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.924 [2024-07-24 08:52:06.736010] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
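Annotation: every test in this block boots a fresh one-core SPDK app, so the same startup records repeat: the EAL parameter list (-c 0x1 restricts DPDK to a single core, matching "Total cores available: 1" and the lone reactor on core 0; --no-shconf and --file-prefix=spdk_pid<pid> keep each process's shared memory and hugepage files private; --huge-unlink removes the hugepage backing files once mapped), the "No free 2048 kB hugepages reported on node 1" notice (expected when hugepages are reserved on node 0 only), and the in-development DPDK warning. The per-PID file prefix is observable outside the app; for instance (illustrative path, assuming the default hugetlbfs mount):

    # Hugepage backing files are namespaced by --file-prefix, so
    # concurrent runs cannot collide (nothing is listed once
    # --huge-unlink has already removed them):
    ls /dev/hugepages/ | grep '^spdk_pid' || true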
00:06:28.924 [2024-07-24 08:52:06.765611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.924 [2024-07-24 08:52:06.859836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 
08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.924 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:28.925 08:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:30.306 08:52:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.306 00:06:30.306 real 0m1.415s 00:06:30.306 user 0m1.261s 00:06:30.306 sys 0m0.158s 00:06:30.306 08:52:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.306 08:52:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:30.306 ************************************ 00:06:30.306 END TEST accel_dif_generate 00:06:30.306 ************************************ 00:06:30.306 08:52:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:30.306 08:52:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:30.306 08:52:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.306 08:52:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:30.306 ************************************ 00:06:30.306 START TEST accel_dif_generate_copy 00:06:30.306 ************************************ 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:30.306 [2024-07-24 08:52:08.157549] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:30.306 [2024-07-24 08:52:08.157605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647690 ] 00:06:30.306 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.306 [2024-07-24 08:52:08.189878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.306 [2024-07-24 08:52:08.219798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.306 [2024-07-24 08:52:08.315018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 
accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:30.306 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:30.307 08:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:31.687 08:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.687 00:06:31.687 real 0m1.416s 00:06:31.687 user 0m1.263s 
00:06:31.688 sys 0m0.156s 00:06:31.688 08:52:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.688 08:52:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:31.688 ************************************ 00:06:31.688 END TEST accel_dif_generate_copy 00:06:31.688 ************************************ 00:06:31.688 08:52:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:31.688 08:52:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.688 08:52:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:31.688 08:52:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.688 08:52:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:31.688 ************************************ 00:06:31.688 START TEST accel_comp 00:06:31.688 ************************************ 00:06:31.688 08:52:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:31.688 08:52:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:31.688 [2024-07-24 08:52:09.619820] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:31.688 [2024-07-24 08:52:09.619874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3647912 ] 00:06:31.688 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.688 [2024-07-24 08:52:09.651913] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
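The dif_generate_copy pass above finishes on the software module in about 1.4 s of wall time (real 0m1.416s). As a minimal hand-replay of the traced invocation, assuming the flag readings inferred from the captured values (-t 1 matches the traced '1 seconds' value, -w names the opcode), and dropping -c /dev/fd/62 since build_accel_config produced an empty config in this trace:

  # hypothetical replay of the traced run; binary path taken from the log
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w dif_generate_copy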
00:06:31.688 [2024-07-24 08:52:09.681755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.688 [2024-07-24 08:52:09.776456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # 
val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:31.947 08:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # 
IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:32.920 08:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.920 00:06:32.920 real 0m1.419s 00:06:32.920 user 0m1.277s 00:06:32.920 sys 0m0.146s 00:06:32.920 08:52:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.920 08:52:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:32.920 ************************************ 00:06:32.920 END TEST accel_comp 00:06:32.920 ************************************ 00:06:33.181 08:52:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.181 08:52:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:33.181 08:52:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.181 08:52:11 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.181 ************************************ 00:06:33.181 START TEST accel_decomp 00:06:33.181 ************************************ 00:06:33.181 08:52:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:33.181 08:52:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 
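The accel_comp pass compresses the bib test file for one second, again on the software module (real 0m1.419s), and the accel_decomp run now being configured reads the same file back with -y added; the decompress trace carries 'Yes' where the compress trace carried 'No', so -y is presumably a verify switch (an assumption, not confirmed by the log). A sketch of the pair under that reading:

  # hypothetical compress/decompress replay; -y semantics assumed
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y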
00:06:33.181 [2024-07-24 08:52:11.085507] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:33.181 [2024-07-24 08:52:11.085575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648066 ] 00:06:33.181 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.181 [2024-07-24 08:52:11.117688] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:33.181 [2024-07-24 08:52:11.147267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.181 [2024-07-24 08:52:11.242467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.439 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 
00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:33.440 08:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:34.377 08:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:34.377 00:06:34.377 real 0m1.419s 00:06:34.377 user 0m1.275s 00:06:34.377 sys 0m0.148s 00:06:34.377 08:52:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.377 08:52:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:34.377 ************************************ 00:06:34.377 END TEST accel_decomp 00:06:34.377 ************************************ 00:06:34.636 08:52:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.636 08:52:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:34.636 08:52:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.636 08:52:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.636 ************************************ 00:06:34.636 START TEST accel_decomp_full 00:06:34.636 ************************************ 00:06:34.636 08:52:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.636 08:52:12 accel.accel_decomp_full -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:34.636 08:52:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:34.636 [2024-07-24 08:52:12.551769] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:34.636 [2024-07-24 08:52:12.551838] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648225 ] 00:06:34.636 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.636 [2024-07-24 08:52:12.583214] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.636 [2024-07-24 08:52:12.613087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.636 [2024-07-24 08:52:12.708265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 
accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.896 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- 
accel/accel.sh@20 -- # val=Yes 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:34.897 08:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.837 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:35.838 08:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.838 00:06:35.838 real 0m1.408s 00:06:35.838 user 0m1.264s 00:06:35.838 sys 0m0.148s 00:06:35.838 08:52:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.838 08:52:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:35.838 ************************************ 
00:06:35.838 END TEST accel_decomp_full 00:06:35.838 ************************************ 00:06:36.098 08:52:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.098 08:52:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:36.098 08:52:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.098 08:52:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:36.098 ************************************ 00:06:36.098 START TEST accel_decomp_mcore 00:06:36.098 ************************************ 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:36.098 08:52:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:36.098 [2024-07-24 08:52:14.003895] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:36.098 [2024-07-24 08:52:14.003959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648501 ] 00:06:36.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.098 [2024-07-24 08:52:14.036035] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
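Relative to the plain decompress run, accel_decomp_full only adds -o 0, and the traced transfer size jumps from '4096 bytes' to '111250 bytes', i.e. the whole test file per operation instead of 4 KiB chunks; that reading of -o 0 is inferred from the captured values, not documented here. A sketch under that assumption:

  # hypothetical full-buffer decompress replay; -o 0 interpretation assumed
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0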
00:06:36.098 [2024-07-24 08:52:14.065774] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.098 [2024-07-24 08:52:14.164123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.098 [2024-07-24 08:52:14.164178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.098 [2024-07-24 08:52:14.164266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.098 [2024-07-24 08:52:14.164269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 
-- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.358 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:36.359 08:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.295 
08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.295 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.296 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.556 00:06:37.556 real 0m1.426s 00:06:37.556 user 0m4.739s 00:06:37.556 sys 0m0.156s 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.556 08:52:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 END TEST accel_decomp_mcore 00:06:37.556 ************************************ 00:06:37.556 08:52:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore 
accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.556 08:52:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:37.556 08:52:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.556 08:52:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.556 ************************************ 00:06:37.556 START TEST accel_decomp_full_mcore 00:06:37.556 ************************************ 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:37.556 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:37.556 [2024-07-24 08:52:15.475386] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:37.556 [2024-07-24 08:52:15.475466] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648663 ] 00:06:37.556 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.556 [2024-07-24 08:52:15.508284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
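The accel_decomp_mcore pass above adds -m 0xf, and the app start notice reports four reactors (cores 0-3); the timing block agrees, with user 0m4.739s against real 0m1.426s, roughly four cores kept busy for the one-second run. The accel_decomp_full_mcore variant now starting combines the full-buffer and multicore switches; a sketch with the same assumed flag readings as above:

  # hypothetical 4-core full-buffer replay; core-mask reading taken from the
  # 'Total cores available: 4' notice in the trace
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf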
00:06:37.556 [2024-07-24 08:52:15.538633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.556 [2024-07-24 08:52:15.636363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.556 [2024-07-24 08:52:15.636427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.556 [2024-07-24 08:52:15.636523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.556 [2024-07-24 08:52:15.636526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.816 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@20 -- # val= 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:37.817 08:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore 
-- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:39.197 00:06:39.197 real 0m1.437s 00:06:39.197 user 0m4.780s 00:06:39.197 sys 0m0.157s 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.197 08:52:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:39.197 ************************************ 00:06:39.197 END TEST accel_decomp_full_mcore 00:06:39.197 ************************************ 00:06:39.197 08:52:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:39.197 08:52:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:39.197 08:52:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.197 08:52:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.197 ************************************ 00:06:39.197 START TEST accel_decomp_mthread 00:06:39.197 ************************************ 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:39.197 08:52:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:39.197 [2024-07-24 08:52:16.954890] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:39.197 [2024-07-24 08:52:16.954944] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3648817 ] 00:06:39.197 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.197 [2024-07-24 08:52:16.986432] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
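Where the mcore variants fan out with a core mask, the accel_decomp_mthread test starting here keeps EAL on one core (-c 0x1 in the parameters above) and scales with -T 2 instead. A sketch under the assumption, taken from this trace alone, that -T sets the worker-thread count:

    # Sketch: one core, two decompress threads; -T read as "thread count".
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -T 2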
00:06:39.197 [2024-07-24 08:52:17.016234] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.197 [2024-07-24 08:52:17.111163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.197 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:39.198 08:52:17 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 
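The long runs of IFS=:/read/case lines above come from a loop at accel.sh lines 19-23 that scans accel_perf's key:value output for the opcode and module actually used. A rough, runnable reconstruction; the two sample input lines are stand-ins, since accel_perf's exact output format is not visible in this trace:

    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=${val# } ;;    # accel.sh@23: e.g. "decompress"
            *module*) accel_module=${val# } ;; # accel.sh@22: e.g. "software"
        esac
    done < <(printf '%s\n' 'workload opcode: decompress' 'engine module: software')
    echo "$accel_module handled $accel_opc"    # -> software handled decompress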
00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:40.580 00:06:40.580 real 0m1.410s 00:06:40.580 user 0m1.266s 00:06:40.580 sys 0m0.148s 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.580 08:52:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:40.580 ************************************ 00:06:40.580 END TEST accel_decomp_mthread 00:06:40.580 ************************************ 00:06:40.581 08:52:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.581 08:52:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:40.581 08:52:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.581 08:52:18 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.581 ************************************ 00:06:40.581 START TEST accel_decomp_full_mthread 00:06:40.581 ************************************ 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- 
accel/accel.sh@16 -- # local accel_opc 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:40.581 [2024-07-24 08:52:18.412329] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:40.581 [2024-07-24 08:52:18.412420] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649097 ] 00:06:40.581 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.581 [2024-07-24 08:52:18.444822] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
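build_accel_config, traced at accel.sh@31-41 just above, assembles the JSON that reaches accel_perf through -c /dev/fd/62; with no extra options set, all three '[[ 0 -gt 0 ]]' checks fail and the fragment array stays empty. An approximate reconstruction -- only the array, the comma IFS join, and the trailing jq -r . are visible here, so the envelope is a guess:

    build_accel_config() {
        local accel_json_cfg=()   # optional JSON fragments; empty in this run
        # the real script appends fragments behind the [[ ... -gt 0 ]] checks
        # seen at accel.sh@32-34
        local IFS=,               # accel.sh@40: join fragments with commas
        echo "[${accel_json_cfg[*]}]" | jq -r .   # accel.sh@41
    }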
00:06:40.581 [2024-07-24 08:52:18.474708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.581 [2024-07-24 08:52:18.565898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:40.581 08:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.960 00:06:41.960 real 0m1.452s 00:06:41.960 user 0m1.306s 00:06:41.960 sys 0m0.149s 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.960 08:52:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:41.960 ************************************ 00:06:41.960 END TEST accel_decomp_full_mthread 00:06:41.960 ************************************ 00:06:41.960 08:52:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:41.960 08:52:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:41.960 08:52:19 accel -- accel/accel.sh@137 -- # build_accel_config 
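run_test, which wraps the dif binary here just as it wrapped accel_test above, produces the argument-count check (autotest_common.sh@1099), the xtrace toggles, the starred START/END banners, and the real/user/sys timings seen throughout this log. An approximate shape, not the actual helper:

    run_test() {
        local name=$1; shift
        [ "$#" -le 1 ] && echo "run_test $name: expected a command plus arguments" >&2
        printf '%s\n' '************************************' \
            "START TEST $name" '************************************'
        time "$@"        # autotest_common.sh@1123: run the wrapped command
        local rc=$?
        printf '%s\n' '************************************' \
            "END TEST $name" '************************************'
        return $rc
    }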
00:06:41.960 08:52:19 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:41.960 08:52:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:41.960 08:52:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.960 08:52:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:41.960 08:52:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:41.960 08:52:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.960 08:52:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.960 08:52:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:41.960 08:52:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:41.960 08:52:19 accel -- accel/accel.sh@41 -- # jq -r . 00:06:41.960 ************************************ 00:06:41.960 START TEST accel_dif_functional_tests 00:06:41.960 ************************************ 00:06:41.960 08:52:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:41.960 [2024-07-24 08:52:19.930699] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:41.960 [2024-07-24 08:52:19.930759] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649253 ] 00:06:41.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.960 [2024-07-24 08:52:19.961596] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:41.960 [2024-07-24 08:52:19.991463] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.220 [2024-07-24 08:52:20.090383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.220 [2024-07-24 08:52:20.090448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.220 [2024-07-24 08:52:20.090452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.220 00:06:42.220 00:06:42.221 CUnit - A unit testing framework for C - Version 2.1-3 00:06:42.221 http://cunit.sourceforge.net/ 00:06:42.221 00:06:42.221 00:06:42.221 Suite: accel_dif 00:06:42.221 Test: verify: DIF generated, GUARD check ...passed 00:06:42.221 Test: verify: DIF generated, APPTAG check ...passed 00:06:42.221 Test: verify: DIF generated, REFTAG check ...passed 00:06:42.221 Test: verify: DIF not generated, GUARD check ...[2024-07-24 08:52:20.186136] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.221 passed 00:06:42.221 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 08:52:20.186223] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.221 passed 00:06:42.221 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 08:52:20.186258] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.221 passed 00:06:42.221 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:42.221 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 08:52:20.186320] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:42.221 passed 00:06:42.221 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:42.221 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:42.221 
Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:42.221 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 08:52:20.186479] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:42.221 passed 00:06:42.221 Test: verify copy: DIF generated, GUARD check ...passed 00:06:42.221 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:42.221 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:42.221 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 08:52:20.186628] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:42.221 passed 00:06:42.221 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 08:52:20.186663] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:42.221 passed 00:06:42.221 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 08:52:20.186695] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:42.221 passed 00:06:42.221 Test: generate copy: DIF generated, GUARD check ...passed 00:06:42.221 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:42.221 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:42.221 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:42.221 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:42.221 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:42.221 Test: generate copy: iovecs-len validate ...[2024-07-24 08:52:20.186935] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:42.221 passed 00:06:42.221 Test: generate copy: buffer alignment validate ...passed 00:06:42.221 00:06:42.221 Run Summary: Type Total Ran Passed Failed Inactive 00:06:42.221 suites 1 1 n/a 0 0 00:06:42.221 tests 26 26 26 0 0 00:06:42.221 asserts 115 115 115 0 n/a 00:06:42.221 00:06:42.221 Elapsed time = 0.002 seconds 00:06:42.480 00:06:42.480 real 0m0.514s 00:06:42.480 user 0m0.802s 00:06:42.480 sys 0m0.180s 00:06:42.480 08:52:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.480 08:52:20 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 00:06:42.480 END TEST accel_dif_functional_tests 00:06:42.480 ************************************ 00:06:42.480 00:06:42.480 real 0m31.909s 00:06:42.480 user 0m35.225s 00:06:42.480 sys 0m4.718s 00:06:42.480 08:52:20 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:42.480 08:52:20 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 00:06:42.480 END TEST accel 00:06:42.480 ************************************ 00:06:42.480 08:52:20 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:42.480 08:52:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.480 08:52:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.480 08:52:20 -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 ************************************ 00:06:42.480 START TEST accel_rpc 00:06:42.480 ************************************ 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:42.480 * Looking for test storage... 00:06:42.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:42.480 08:52:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:42.480 08:52:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3649332 00:06:42.480 08:52:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:42.480 08:52:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3649332 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3649332 ']' 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.480 08:52:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.480 [2024-07-24 08:52:20.572341] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:06:42.480 [2024-07-24 08:52:20.572449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649332 ] 00:06:42.740 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.740 [2024-07-24 08:52:20.606430] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:42.740 [2024-07-24 08:52:20.633029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.740 [2024-07-24 08:52:20.721035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.740 08:52:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.740 08:52:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:42.740 08:52:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:42.740 08:52:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:42.740 08:52:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:42.740 08:52:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:42.740 08:52:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:42.740 08:52:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.740 08:52:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.740 08:52:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.740 ************************************ 00:06:42.740 START TEST accel_assign_opcode 00:06:42.740 ************************************ 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.740 [2024-07-24 08:52:20.821813] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.740 [2024-07-24 08:52:20.829822] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.740 08:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:42.998 
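The opcode-assignment sequence above, replayed directly with rpc.py against a target started with --wait-for-rpc; a sketch of the same RPC flow, assuming the default /var/tmp/spdk.sock socket:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC accel_assign_opc -o copy -m software    # pin the copy opcode to software
    $RPC framework_start_init                    # leave the --wait-for-rpc state
    $RPC accel_get_opc_assignments | jq -r .copy # expect: software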
08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:42.998 08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.257 software 00:06:43.257 00:06:43.257 real 0m0.305s 00:06:43.257 user 0m0.038s 00:06:43.257 sys 0m0.009s 00:06:43.257 08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.257 08:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:43.257 ************************************ 00:06:43.257 END TEST accel_assign_opcode 00:06:43.257 ************************************ 00:06:43.257 08:52:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3649332 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3649332 ']' 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3649332 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3649332 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3649332' 00:06:43.257 killing process with pid 3649332 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 3649332 00:06:43.257 08:52:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 3649332 00:06:43.515 00:06:43.515 real 0m1.127s 00:06:43.515 user 0m1.061s 00:06:43.515 sys 0m0.440s 00:06:43.515 08:52:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.515 08:52:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.515 ************************************ 00:06:43.516 END TEST accel_rpc 00:06:43.516 ************************************ 00:06:43.516 08:52:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:43.516 08:52:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:43.516 08:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.516 08:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:43.774 ************************************ 00:06:43.774 START TEST app_cmdline 00:06:43.774 ************************************ 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:43.774 * Looking for test storage... 
00:06:43.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:43.774 08:52:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:43.774 08:52:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3649594 00:06:43.774 08:52:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:43.774 08:52:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3649594 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3649594 ']' 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:43.774 08:52:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:43.774 [2024-07-24 08:52:21.753235] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:43.774 [2024-07-24 08:52:21.753322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3649594 ] 00:06:43.774 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.774 [2024-07-24 08:52:21.785659] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
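This spdk_tgt instance is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods answer; anything else returns JSON-RPC error -32601, which the test provokes below with env_dpdk_get_mem_stats. The behaviour as a sketch:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC spdk_get_version         # allowed: returns the version object below
    $RPC rpc_get_methods          # allowed: lists exactly the permitted methods
    $RPC env_dpdk_get_mem_stats   # rejected: "Method not found" (-32601)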
00:06:43.774 [2024-07-24 08:52:21.811902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.033 [2024-07-24 08:52:21.900705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.293 08:52:22 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:44.293 08:52:22 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:44.293 08:52:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:44.551 { 00:06:44.551 "version": "SPDK v24.09-pre git sha1 78cbcfdde", 00:06:44.551 "fields": { 00:06:44.551 "major": 24, 00:06:44.551 "minor": 9, 00:06:44.551 "patch": 0, 00:06:44.551 "suffix": "-pre", 00:06:44.551 "commit": "78cbcfdde" 00:06:44.551 } 00:06:44.551 } 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:44.552 08:52:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:44.552 08:52:22 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:44.810 request: 00:06:44.810 { 00:06:44.810 "method": 
"env_dpdk_get_mem_stats", 00:06:44.810 "req_id": 1 00:06:44.810 } 00:06:44.810 Got JSON-RPC error response 00:06:44.810 response: 00:06:44.810 { 00:06:44.810 "code": -32601, 00:06:44.810 "message": "Method not found" 00:06:44.810 } 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:44.810 08:52:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3649594 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3649594 ']' 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3649594 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3649594 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3649594' 00:06:44.810 killing process with pid 3649594 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@967 -- # kill 3649594 00:06:44.810 08:52:22 app_cmdline -- common/autotest_common.sh@972 -- # wait 3649594 00:06:45.068 00:06:45.068 real 0m1.513s 00:06:45.068 user 0m1.873s 00:06:45.068 sys 0m0.468s 00:06:45.068 08:52:23 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.068 08:52:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:45.068 ************************************ 00:06:45.068 END TEST app_cmdline 00:06:45.068 ************************************ 00:06:45.327 08:52:23 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.327 08:52:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:45.327 08:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.327 08:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 START TEST version 00:06:45.327 ************************************ 00:06:45.327 08:52:23 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:45.327 * Looking for test storage... 
00:06:45.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:45.327 08:52:23 version -- app/version.sh@17 -- # get_header_version major 00:06:45.327 08:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.327 08:52:23 version -- app/version.sh@17 -- # major=24 00:06:45.327 08:52:23 version -- app/version.sh@18 -- # get_header_version minor 00:06:45.327 08:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.327 08:52:23 version -- app/version.sh@18 -- # minor=9 00:06:45.327 08:52:23 version -- app/version.sh@19 -- # get_header_version patch 00:06:45.327 08:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.327 08:52:23 version -- app/version.sh@19 -- # patch=0 00:06:45.327 08:52:23 version -- app/version.sh@20 -- # get_header_version suffix 00:06:45.327 08:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:45.327 08:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:45.327 08:52:23 version -- app/version.sh@20 -- # suffix=-pre 00:06:45.327 08:52:23 version -- app/version.sh@22 -- # version=24.9 00:06:45.327 08:52:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:45.327 08:52:23 version -- app/version.sh@28 -- # version=24.9rc0 00:06:45.327 08:52:23 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:45.327 08:52:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:45.327 08:52:23 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:45.327 08:52:23 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:45.327 00:06:45.327 real 0m0.105s 00:06:45.327 user 0m0.053s 00:06:45.327 sys 0m0.075s 00:06:45.327 08:52:23 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:45.327 08:52:23 version -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 END TEST version 00:06:45.327 ************************************ 00:06:45.327 08:52:23 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@198 -- # uname -s 00:06:45.327 08:52:23 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:45.327 08:52:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:45.327 08:52:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:45.327 08:52:23 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 
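(The version test above never queries a running target; it scrapes include/spdk/version.h one component at a time and checks that the Python package view of the tree agrees. The extraction pipeline exactly as traced, one grep/cut/tr per field:

  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 24
  grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'   # -> 9

  # with patch == 0 the "-pre" suffix is rendered as "rc0", giving "24.9rc0",
  # which must match what the bundled Python package reports:
  python3 -c 'import spdk; print(spdk.__version__)'   # -> 24.9rc0
)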
00:06:45.327 08:52:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:45.327 08:52:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:45.327 08:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 08:52:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:45.327 08:52:23 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:45.327 08:52:23 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.327 08:52:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.327 08:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.327 08:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:45.327 ************************************ 00:06:45.327 START TEST nvmf_tcp 00:06:45.327 ************************************ 00:06:45.327 08:52:23 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:45.327 * Looking for test storage... 00:06:45.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:45.586 08:52:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:45.586 08:52:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:45.586 08:52:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:45.586 08:52:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.586 08:52:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.586 08:52:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:45.586 ************************************ 00:06:45.586 START TEST nvmf_target_core 00:06:45.586 ************************************ 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:45.586 * Looking for test storage... 00:06:45.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.586 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:45.587 ************************************ 00:06:45.587 START TEST nvmf_abort 00:06:45.587 ************************************ 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:45.587 * Looking for test storage... 
00:06:45.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
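(Everything nvmftestinit sets up from here on - the network namespace, the iptables rule, the target process - is torn down through a trap rather than straight-line code, so a failed assertion or a signal still leaves the machine clean for the next test. A reduced sketch of the idiom; nvmftestfini below is a placeholder standing in for the real helper in test/nvmf/common.sh:

  #!/usr/bin/env bash
  nvmftestfini() {
      # the real helper kills nvmf_tgt, deletes the cvl_0_0_ns_spdk netns,
      # and unloads nvme-tcp; placeholder body here
      echo "tearing down test state"
  }

  trap nvmftestfini SIGINT SIGTERM EXIT   # installed by nvmftestinit

  # ... test body: any error path still reaches the trap ...

  trap - SIGINT SIGTERM EXIT              # cleared on the success path
  nvmftestfini                            # so teardown runs exactly once

Once the target is running, the trace below shows the trap being swapped for 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini', so a crash also dumps the app's shared memory before teardown.)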
00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:45.587 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:48.123 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:48.123 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.123 08:52:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:48.123 Found net devices under 0000:09:00.0: cvl_0_0 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:48.123 Found net devices under 0000:09:00.1: cvl_0_1 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:48.123 
08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:48.123 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:48.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:48.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:06:48.124 00:06:48.124 --- 10.0.0.2 ping statistics --- 00:06:48.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.124 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:48.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:48.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:06:48.124 00:06:48.124 --- 10.0.0.1 ping statistics --- 00:06:48.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:48.124 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=3651574 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3651574 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3651574 ']' 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.124 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 [2024-07-24 08:52:25.824680] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:48.124 [2024-07-24 08:52:25.824768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.124 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.124 [2024-07-24 08:52:25.861957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:48.124 [2024-07-24 08:52:25.893741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:48.124 [2024-07-24 08:52:25.986072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:48.124 [2024-07-24 08:52:25.986143] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:48.124 [2024-07-24 08:52:25.986160] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:48.124 [2024-07-24 08:52:25.986173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:48.124 [2024-07-24 08:52:25.986185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
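(What nvmf_tcp_init just did, condensed: since this is a phy run, the two ice ports are split across network namespaces so traffic actually crosses the wire instead of short-circuiting through loopback. cvl_0_0 becomes the target side (10.0.0.2) inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and nvmf_tgt itself is launched via ip netns exec, as seen above. The sequence from the trace, collected in order:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
)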
00:06:48.124 [2024-07-24 08:52:25.986271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.124 [2024-07-24 08:52:25.986390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.124 [2024-07-24 08:52:25.986392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 [2024-07-24 08:52:26.133775] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 Malloc0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 Delay0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 [2024-07-24 08:52:26.206594] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:48.124 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:48.383 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.383 [2024-07-24 08:52:26.353244] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:50.922 Initializing NVMe Controllers 00:06:50.922 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:50.922 controller IO queue size 128 less than required 00:06:50.922 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:50.922 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:50.922 Initialization complete. Launching workers. 
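(The target the abort example is hammering was assembled entirely over JSON-RPC in the lines above: a TCP transport, a Malloc bdev wrapped in a delay bdev so submitted I/O lingers long enough to be abortable, a subsystem exposing that namespace, and data plus discovery listeners on 10.0.0.2:4420. The same bring-up written as plain calls - rpc.py here meaning scripts/rpc.py against the target's socket - with the abort statistics following below:

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  rpc.py bdev_malloc_create 64 4096 -b Malloc0
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
         -r 1000000 -t 1000000 -w 1000000 -n 1000000   # per-I/O delays keep requests in flight
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # queue depth 128 against a smaller controller queue, so excess requests
  # sit in the NVMe driver where the abort path can reach them
  ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
         -c 0x1 -t 1 -l warning -q 128
)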
00:06:50.922 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29566 00:06:50.922 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29627, failed to submit 62 00:06:50.922 success 29570, unsuccess 57, failed 0 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:50.922 rmmod nvme_tcp 00:06:50.922 rmmod nvme_fabrics 00:06:50.922 rmmod nvme_keyring 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3651574 ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3651574 ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3651574' 00:06:50.922 killing process with pid 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3651574 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.922 08:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.855 00:06:52.855 real 0m7.336s 00:06:52.855 user 0m10.794s 00:06:52.855 sys 0m2.588s 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:52.855 ************************************ 00:06:52.855 END TEST nvmf_abort 00:06:52.855 ************************************ 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.855 ************************************ 00:06:52.855 START TEST nvmf_ns_hotplug_stress 00:06:52.855 ************************************ 00:06:52.855 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:53.114 * Looking for test storage... 
00:06:53.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:53.114 08:52:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:53.114 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:06:55.023 Found 0000:09:00.0 (0x8086 - 0x159b) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:55.023 08:52:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:06:55.023 Found 0000:09:00.1 (0x8086 - 0x159b) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:06:55.023 Found net devices under 0000:09:00.0: cvl_0_0 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:06:55.023 Found net devices under 0000:09:00.1: cvl_0_1 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
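
The discovery pass above walks each supported NIC's PCI address and resolves it to a kernel interface name via sysfs. A minimal standalone sketch of the same idea, using the E810 vendor/device pair (0x8086:0x159b) and the sysfs layout seen in this run; the lspci-based matching is an illustrative assumption, not the script's exact code:

    #!/usr/bin/env bash
    # Sketch: map Intel E810 NICs (0x8086:0x159b) to their net interface names.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $net ]] || continue              # skip devices with no bound netdev
            echo "Found net devices under $pci: ${net##*/}"
        done
    done
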
net_devs+=("${pci_net_devs[@]}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:55.023 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:55.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:55.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:06:55.024 00:06:55.024 --- 10.0.0.2 ping statistics --- 00:06:55.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.024 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:55.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:06:55.024 00:06:55.024 --- 10.0.0.1 ping statistics --- 00:06:55.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.024 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3653906 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3653906 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3653906 ']' 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
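
Everything from ip netns add through the two pings is nvmftestinit building the loopback-free test rig: one port of the NIC pair moves into a private network namespace and becomes the target side (10.0.0.2), the other stays in the default namespace as the initiator (10.0.0.1), and the target application is then launched inside that namespace. Condensed from the commands traced above (interface names cvl_0_0/cvl_0_1 as in this run; the nvmf_tgt path is shortened for readability):

    # Target port lives in its own netns; initiator port stays in the default ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                                   # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
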
00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:55.024 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.024 [2024-07-24 08:52:33.122203] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:06:55.024 [2024-07-24 08:52:33.122276] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.282 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.282 [2024-07-24 08:52:33.158803] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:55.282 [2024-07-24 08:52:33.185709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.282 [2024-07-24 08:52:33.274758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.282 [2024-07-24 08:52:33.274812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.282 [2024-07-24 08:52:33.274840] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.282 [2024-07-24 08:52:33.274851] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.282 [2024-07-24 08:52:33.274861] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.282 [2024-07-24 08:52:33.274953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.282 [2024-07-24 08:52:33.275017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:55.282 [2024-07-24 08:52:33.275020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.282 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.282 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:55.282 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.282 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.282 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.541 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.541 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:55.541 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:55.541 [2024-07-24 08:52:33.634924] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.799 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.799 08:52:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:56.057 [2024-07-24 08:52:34.138672] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.057 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.316 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:56.574 Malloc0 00:06:56.574 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.832 Delay0 00:06:56.832 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.090 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:57.348 NULL1 00:06:57.348 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:57.606 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3654217 00:06:57.606 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:57.606 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:06:57.606 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.606 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.864 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.122 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:58.122 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:58.380 true 00:06:58.380 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:06:58.380 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.638 08:52:36 
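
With networking up, the whole target is provisioned over scripts/rpc.py: a TCP transport, a subsystem capped at 10 namespaces (-m 10), data and discovery listeners on port 4420, and two backing bdevs, a delay-wrapped malloc bdev (Delay0) and a resizable null bdev (NULL1). The perf initiator is then started against it in the background. Condensed from the trace, with $rpc standing in for the full scripts/rpc.py path:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 512 -b Malloc0          # 32 MiB, 512 B blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    $rpc bdev_null_create NULL1 1000 512               # 1000 MiB resizable null bdev
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
    build/bin/spdk_nvme_perf -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 30 -q 128 -w randread -o 512 -Q 1000 &
    PERF_PID=$!
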
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.896 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:58.896 08:52:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:59.154 true 00:06:59.154 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:06:59.154 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.089 Read completed with error (sct=0, sc=11) 00:07:00.089 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.347 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:00.347 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:00.605 true 00:07:00.605 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:00.605 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.863 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.123 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:01.123 08:52:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:01.123 true 00:07:01.383 08:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:01.383 08:52:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.318 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.318 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.576 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:02.576 08:52:40 
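
Every repetition that follows is one pass of the hotplug loop: while spdk_nvme_perf is still running, namespace 1 is torn out from under it and immediately re-added, and the null bdev grows by 1 MiB so the next attach also carries a size change. The suppressed "Read completed with error (sct=0, sc=11)" messages are the expected side effect, consistent with reads landing on a namespace that is momentarily absent. A sketch of the loop, reconstructed from the traced script lines (@44-@50):

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do                         # run until perf exits
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # yank the ns under I/O
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # plug it back in
        null_size=$((null_size + 1))
        $rpc bdev_null_resize NULL1 "$null_size"                      # grow NULL1 by 1 MiB
    done
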
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:02.576 true 00:07:02.576 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:02.576 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.834 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.092 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:03.092 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:03.350 true 00:07:03.350 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:03.350 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.283 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.283 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:04.541 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:04.541 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:04.799 true 00:07:04.799 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:04.799 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.056 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.313 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:05.313 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:05.570 true 00:07:05.570 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:05.570 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.503 08:52:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.762 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:06.762 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:06.762 true 00:07:06.762 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:06.762 08:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.020 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.278 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:07.278 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:07.535 true 00:07:07.535 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:07.535 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.468 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.725 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:08.725 08:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:08.982 true 00:07:08.982 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:08.982 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.240 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.498 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:09.498 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:09.757 true 00:07:09.757 08:52:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:09.757 08:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.723 08:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.723 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:10.981 08:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:10.981 08:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:11.239 true 00:07:11.239 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:11.239 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.496 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.753 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:11.753 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:12.011 true 00:07:12.011 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:12.011 08:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.947 08:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.204 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:13.204 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:13.204 true 00:07:13.462 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:13.462 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.462 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.719 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:13.720 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:13.977 true 00:07:13.977 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:13.977 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.910 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:14.910 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.167 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:15.167 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:15.428 true 00:07:15.428 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:15.428 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.688 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.946 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:15.946 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:16.203 true 00:07:16.203 08:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:16.203 08:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.136 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:17.136 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.393 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:17.393 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:17.651 true 00:07:17.651 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:17.651 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.908 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.165 08:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:18.165 08:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:18.165 true 00:07:18.423 08:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:18.423 08:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.355 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:19.355 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.613 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:19.613 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:19.870 true 00:07:19.870 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:19.870 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.128 08:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.128 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:20.128 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:20.385 true 00:07:20.385 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:20.385 08:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.318 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.575 08:52:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:21.575 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:21.833 true 00:07:21.833 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:21.833 08:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.090 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.348 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:22.348 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:22.605 true 00:07:22.605 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:22.605 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.863 08:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.120 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:23.120 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:23.378 true 00:07:23.378 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:23.378 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.749 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.749 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.749 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:24.749 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:25.007 true 00:07:25.007 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:25.007 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:25.264 08:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.521 08:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:25.521 08:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:25.779 true 00:07:25.779 08:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:25.779 08:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.710 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.711 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.711 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.967 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:26.967 08:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:26.967 true 00:07:26.967 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:26.967 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.225 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.506 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:27.506 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:27.768 true 00:07:27.768 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:27.768 08:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.699 Initializing NVMe Controllers 00:07:28.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:28.699 Controller IO queue size 128, less than required. 00:07:28.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.699 Controller IO queue size 128, less than required. 
00:07:28.699 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:28.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:28.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:28.699 Initialization complete. Launching workers. 00:07:28.699 ======================================================== 00:07:28.699 Latency(us) 00:07:28.699 Device Information : IOPS MiB/s Average min max 00:07:28.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 771.30 0.38 86374.56 2627.63 1012296.48 00:07:28.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10624.97 5.19 12011.89 3101.60 450190.68 00:07:28.699 ======================================================== 00:07:28.699 Total : 11396.27 5.56 17044.76 2627.63 1012296.48 00:07:28.699 00:07:28.699 08:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.956 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:28.956 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:29.213 true 00:07:29.213 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3654217 00:07:29.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3654217) - No such process 00:07:29.213 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3654217 00:07:29.213 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.470 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.727 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:29.727 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:29.727 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:29.727 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.727 08:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:29.985 null0 00:07:29.985 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.985 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.985 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:30.243 null1 
00:07:30.243 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.243 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.243 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:30.501 null2 00:07:30.501 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.501 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.501 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:30.759 null3 00:07:30.759 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:30.760 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:30.760 08:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:31.017 null4 00:07:31.017 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.017 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.017 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:31.275 null5 00:07:31.275 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.275 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.275 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:31.532 null6 00:07:31.532 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.532 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.532 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:31.791 null7 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:31.791 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
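Interleaved with the fork trace are the first iterations of the workers themselves: each worker's @14-@18 entries are the xtrace of the add_remove helper. Reconstructed from those entries, it is a tight attach/detach loop, ten cycles per worker per the (( i < 10 )) guards:

    add_remove() {
        # Repeatedly attach the given bdev as namespace $nsid, then detach it.
        local nsid=$1 bdev=$2 i
        local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }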
00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3658292 3658293 3658295 3658297 3658299 3658301 3658304 3658306 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.792 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.049 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.049 08:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.050 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.308 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:32.566 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.824 08:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.082 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.339 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.340 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:33.597 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
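From here on the add/remove entries appear shuffled because the eight workers race against the same subsystem: within any one worker its namespace ID strictly alternates add then remove, but across workers the RPCs land in whatever order the shells get scheduled. A rough, hypothetical way to watch the churn from a third shell while the test runs (nvmf_get_subsystems is the stock rpc.py state dump; counting lines that mention nsid only approximates the number of attached namespaces):

    watch -n 0.2 '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems | grep -c nsid'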
00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:33.854 08:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.112 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.370 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:34.628 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:34.628 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:34.628 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:34.628 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:34.628 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:34.886 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.144 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.402 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:35.661 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:35.917 08:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.174 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.432 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:36.689 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
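The entries that remain run the last workers up to their tenth iteration, after which the script clears its traps and calls nvmftestfini (the @68/@70 entries near the end of this section). The nvmf/common.sh trace there shows nvmfcleanup retrying modprobe -r for nvme-tcp and nvme-fabrics (the rmmod output also lists nvme_keyring), and then killprocess reaping the target (PID 3653906, running as reactor_1). A sketch of killprocess reconstructed from the autotest_common.sh@948-@972 entries below; the sudo guard and the comm= lookup are visible in the trace, while the surrounding control flow is inferred:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return                  # bail out if already gone
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [[ $process_name == sudo ]] && return 1   # never reap a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }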
00:07:36.689 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.689 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:36.689 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:36.690 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:36.948 08:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:37.206 rmmod nvme_tcp 00:07:37.206 rmmod nvme_fabrics 00:07:37.206 rmmod nvme_keyring 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3653906 ']' 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3653906 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3653906 ']' 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3653906 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:37.206 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3653906 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3653906' 00:07:37.464 killing process with pid 3653906 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3653906 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3653906 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:37.464 08:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:40.011 00:07:40.011 real 0m46.673s 00:07:40.011 user 3m28.776s 00:07:40.011 sys 0m18.200s 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:40.011 ************************************ 00:07:40.011 END TEST nvmf_ns_hotplug_stress 00:07:40.011 ************************************ 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:40.011 ************************************ 00:07:40.011 START TEST nvmf_delete_subsystem 00:07:40.011 ************************************ 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:40.011 * Looking for test storage... 
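Before the next suite starts, the nvmftestfini/nvmfcleanup trace above amounts to the following teardown. This is a hedged recap, not the verbatim helper: the pid and interface names are the ones printed above, and the namespace removal inside _remove_spdk_ns is assumed.

    modprobe -v -r nvme-tcp          # per the rmmod output above this also drops nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3653906 && wait 3653906     # killprocess: stop the nvmf_tgt reactor started for this test
    ip netns delete cvl_0_0_ns_spdk 2> /dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1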
00:07:40.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:40.011 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:40.012 08:53:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.917 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.917 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.917 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:41.918 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:41.918 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:41.918 Found net devices under 0000:09:00.0: cvl_0_0 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:41.918 Found net devices under 0000:09:00.1: cvl_0_1 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.918 08:53:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:41.918 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:41.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:41.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms
00:07:41.919
00:07:41.919 --- 10.0.0.2 ping statistics ---
00:07:41.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:41.919 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:41.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:41.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:07:41.919
00:07:41.919 --- 10.0.0.1 ping statistics ---
00:07:41.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:41.919 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3661102
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3661102
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3661102 ']'
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:41.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:41.919 08:53:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:41.919 [2024-07-24 08:53:19.950617] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
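Collected from the nvmf_tcp_init trace above: the first ice port (cvl_0_0) is moved into the private namespace cvl_0_0_ns_spdk and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. The commands below are copied from that trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP to the listener port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt itself is then launched behind ip netns exec (nvmf/common.sh@480 above), which is why every RPC-created listener and every perf connection in this log flows 10.0.0.1 -> 10.0.0.2 across the two physical ports.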
00:07:41.919 [2024-07-24 08:53:19.950689] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.919 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.919 [2024-07-24 08:53:19.987471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:41.919 [2024-07-24 08:53:20.017382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:42.178 [2024-07-24 08:53:20.109727] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:42.178 [2024-07-24 08:53:20.109779] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:42.178 [2024-07-24 08:53:20.109807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:42.178 [2024-07-24 08:53:20.109825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:42.178 [2024-07-24 08:53:20.109834] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:42.178 [2024-07-24 08:53:20.109915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.178 [2024-07-24 08:53:20.109921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 [2024-07-24 08:53:20.243345] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 [2024-07-24 08:53:20.259573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 NULL1 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 Delay0 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3661206 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:42.178 08:53:20 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:42.436 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.436 [2024-07-24 08:53:20.334253] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
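The RPC trace above, plus the delete and wait steps that follow below, make up the whole delete-under-load scenario. Reassembled as a script, with the commands taken from the trace and only the grouping and comments added (the size unit of bdev_null_create is assumed to be MiB):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

    "$rpc_py" nvmf_create_transport -t tcp -o -u 8192
    "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    "$rpc_py" bdev_null_create NULL1 1000 512    # 1000 MiB null bdev with 512-byte blocks
    # wrap it in a delay bdev so queued I/O is still in flight when the subsystem goes away
    "$rpc_py" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    "$perf" -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    "$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # yank the subsystem mid-run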
00:07:44.333 08:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.333 08:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.333 08:53:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error 
(sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 [2024-07-24 08:53:22.472164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a4800d000 is same with the state(5) to be set 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 starting I/O failed: -6 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Write completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.591 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, 
sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with 
error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 Read completed with error (sct=0, sc=8) 00:07:44.592 Write completed with error (sct=0, sc=8) 00:07:44.592 starting I/O failed: -6 00:07:44.592 starting I/O failed: -6 00:07:45.524 [2024-07-24 08:53:23.431058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1872b40 is same with the state(5) to be set 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 [2024-07-24 08:53:23.471012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1855100 is same with the state(5) to be set 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 [2024-07-24 08:53:23.471997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x7f9a4800d330 is same with the state(5) to be set 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 [2024-07-24 08:53:23.474928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185b300 is same with the state(5) to be set 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Write completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 Read completed with error (sct=0, sc=8) 00:07:45.524 
00:07:45.524 [remaining Read/Write completed with error (sct=0, sc=8) entries omitted]
00:07:45.524 [2024-07-24 08:53:23.475163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1854d40 is same with the state(5) to be set
00:07:45.524 Initializing NVMe Controllers
00:07:45.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:45.524 Controller IO queue size 128, less than required.
00:07:45.524 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:45.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:45.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:45.524 Initialization complete. Launching workers.
00:07:45.524 ========================================================
00:07:45.524                                             Latency(us)
00:07:45.524 Device Information                                                     :    IOPS   MiB/s    Average        min        max
00:07:45.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  187.61    0.09  965031.48     767.24 1011461.09
00:07:45.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  150.88    0.07  894358.40     440.39 1012052.79
00:07:45.524 ========================================================
00:07:45.524 Total                                                                  :  338.49    0.17  933529.11     440.39 1012052.79
00:07:45.524 [2024-07-24 08:53:23.475990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1872b40 (9): Bad file descriptor
00:07:45.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:45.524 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:45.524 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:45.524 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3661206
00:07:45.524 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3661206
00:07:46.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3661206) - No such process
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3661206
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3661206
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem --
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3661206
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:46.116 [common/autotest_common.sh xtrace_disable / set +x wrapper lines around each rpc_cmd omitted]
00:07:46.116 08:53:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:46.116 [2024-07-24 08:53:24.000230] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:46.116 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.116 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3661616
00:07:46.116 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:07:46.117 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:07:46.117 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3661616
00:07:46.117 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:46.117 EAL: No free 2048 kB hugepages reported on node 1
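The kill -0 lines in the trace above are the harness's wait-for-exit idiom: signal 0 is never delivered, so the call only reports whether the PID still exists, and the loop gives spdk_nvme_perf roughly 15 seconds (thirty 0.5 s sleeps, per the (( delay++ > 30 )) guard) to finish on its own. Distilled into a standalone helper, as a paraphrase of the traced loop rather than the script's literal code:

  # Poll a PID with signal 0 until the process exits; give up after ~15 s
  wait_for_exit() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          (( delay++ > 30 )) && return 1
          sleep 0.5
      done
  }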
00:07:46.117 [2024-07-24 08:53:24.058172] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:07:46.682 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:46.682 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3661616
00:07:46.682 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:46.939 [the same (( delay++ > 20 )) / kill -0 3661616 / sleep 0.5 round trips repeat at 00:07:46.939, 00:07:47.504, 00:07:48.070, 00:07:48.635 and 00:07:49.201 while spdk_nvme_perf runs]
00:07:49.201 Initializing NVMe Controllers
00:07:49.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:49.201 Controller IO queue size 128, less than required.
00:07:49.201 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:49.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:49.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:49.201 Initialization complete. Launching workers.
00:07:49.201 ========================================================
00:07:49.201                                             Latency(us)
00:07:49.201 Device Information                                                     :    IOPS   MiB/s    Average        min        max
00:07:49.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1004054.51 1000239.81 1041410.13
00:07:49.201 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004690.84 1000219.87 1041663.78
00:07:49.201 ========================================================
00:07:49.201 Total                                                                  :  256.00    0.12 1004372.67 1000219.87 1041663.78
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3661616
00:07:49.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3661616) - No such process
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3661616
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:49.459 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3661102 ']'
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3661102
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3661102 ']'
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3661102
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3661102
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '['
reactor_0 = sudo ']' 00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3661102' 00:07:49.718 killing process with pid 3661102 00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3661102 00:07:49.718 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3661102 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.977 08:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.882 00:07:51.882 real 0m12.237s 00:07:51.882 user 0m27.710s 00:07:51.882 sys 0m2.880s 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.882 ************************************ 00:07:51.882 END TEST nvmf_delete_subsystem 00:07:51.882 ************************************ 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.882 ************************************ 00:07:51.882 START TEST nvmf_host_management 00:07:51.882 ************************************ 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.882 * Looking for test storage... 
00:07:51.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.882 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.141 08:53:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.141 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[six further repetitions of the golangci/protoc/go prefix triplet omitted]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated prefix chain omitted]:/var/lib/snapd/snap/bin
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated prefix chain omitted]:/var/lib/snapd/snap/bin
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated prefix chain omitted]:/var/lib/snapd/snap/bin
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.142 08:53:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.044 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.044 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:54.044 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:54.044 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:54.044 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:54.045 
08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:07:54.045 Found 0000:09:00.0 (0x8086 - 0x159b) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:07:54.045 Found 0000:09:00.1 (0x8086 - 0x159b) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:07:54.045 Found net devices under 0000:09:00.0: cvl_0_0 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:07:54.045 Found net devices under 0000:09:00.1: cvl_0_1 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.045 08:53:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:54.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:07:54.045 00:07:54.045 --- 10.0.0.2 ping statistics --- 00:07:54.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.045 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:07:54.045 00:07:54.045 --- 10.0.0.1 ping statistics --- 00:07:54.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.045 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.045 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3663960 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3663960 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3663960 ']' 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:54.046 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.046 [2024-07-24 08:53:32.085628] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:07:54.046 [2024-07-24 08:53:32.085714] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.046 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.046 [2024-07-24 08:53:32.122352] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.046 [2024-07-24 08:53:32.154479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:54.304 [2024-07-24 08:53:32.247784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.304 [2024-07-24 08:53:32.247839] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.304 [2024-07-24 08:53:32.247855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.304 [2024-07-24 08:53:32.247868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.304 [2024-07-24 08:53:32.247885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
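Worth pausing on how this target came up: nvmfappstart runs nvmf_tgt inside the network namespace that nvmf_tcp_init built a few lines earlier, so the target (10.0.0.2 on cvl_0_0) and the initiator (10.0.0.1 on cvl_0_1) talk over a real network hop on one machine. The -m 0x1E core mask selects cores 1 through 4, which is why exactly four reactor notices follow, and -e 0xFFFF enables every tracepoint group, hence the spdk_trace hints just printed. Reassembled without the xtrace noise (interface, namespace and address values are the ones this run used; a sketch of the pattern, not portable tooling):

  # Move one NIC port into a private namespace for the target side
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target, inside namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
  # Start the target in that namespace: reactors on cores 1-4, all tracepoints on
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &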
00:07:54.304 [2024-07-24 08:53:32.247979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.304 [2024-07-24 08:53:32.248076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:54.304 [2024-07-24 08:53:32.248300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:54.304 [2024-07-24 08:53:32.248303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.304 [2024-07-24 08:53:32.403370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:54.304 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:54.305 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:54.305 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.305 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 Malloc0 00:07:54.563 [2024-07-24 08:53:32.463734] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=3664118
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3664118 /var/tmp/bdevperf.sock
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3664118 ']'
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:07:54.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:07:54.563 {
00:07:54.563   "params": {
00:07:54.563     "name": "Nvme$subsystem",
00:07:54.563     "trtype": "$TEST_TRANSPORT",
00:07:54.563     "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:54.563     "adrfam": "ipv4",
00:07:54.563     "trsvcid": "$NVMF_PORT",
00:07:54.563     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:54.563     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:54.563     "hdgst": ${hdgst:-false},
00:07:54.563     "ddgst": ${ddgst:-false}
00:07:54.563   },
00:07:54.563   "method": "bdev_nvme_attach_controller"
00:07:54.563 }
00:07:54.563 EOF
00:07:54.563 )")
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:07:54.563 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode0", "hostnqn": "nqn.2016-06.io.spdk:host0", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
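gen_nvmf_target_json only emits the attach-controller fragment printed above; the harness wraps it in a JSON-config document and hands it to bdevperf over fd 63 via process substitution. A standalone equivalent of the same launch (the envelope shown is SPDK's usual {"subsystems": [...]} config shape, and the temp-file name is purely illustrative):

  cat > /tmp/nvme0.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420",
                        "subnqn": "nqn.2016-06.io.spdk:cnode0",
                        "hostnqn": "nqn.2016-06.io.spdk:host0",
                        "hdgst": false, "ddgst": false }
          }
        ]
      }
    ]
  }
  EOF
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10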
00:07:54.563 [2024-07-24 08:53:32.543983] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:07:54.563 [2024-07-24 08:53:32.544055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664118 ]
00:07:54.563 EAL: No free 2048 kB hugepages reported on node 1
00:07:54.563 [2024-07-24 08:53:32.575896] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:54.563 [2024-07-24 08:53:32.604851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.822 [2024-07-24 08:53:32.693243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.822 Running I/O for 10 seconds...
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:07:54.822 [common/autotest_common.sh xtrace_disable / set +x wrapper lines omitted]
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:07:54.822 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:54.823 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:54.823 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:55.081 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67
00:07:55.081 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']'
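waitforio, traced above, is a plain poll: it asks bdevperf's RPC server for the bdev's iostat until at least 100 reads have completed, proving I/O is actually flowing before the test starts breaking things (the first probe here saw only 67). Distilled as a paraphrase of the traced loop, with the real socket path and bdev name from this run:

  # Up to 10 attempts, 0.25 s apart, for Nvme0n1 to reach 100 completed reads
  for i in {1..10}; do
      n=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
          jq -r '.bdevs[0].num_read_ops')
      [ "$n" -ge 100 ] && break
      sleep 0.25
  done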
00:07:55.081 08:53:32 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- ))
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:07:55.341 [common/autotest_common.sh xtrace_disable / set +x wrapper lines omitted]
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']'
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:55.341 [2024-07-24 08:53:33.269100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:55.341 [2024-07-24 08:53:33.269177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:55.341 [dozens of further READ/WRITE nvme_io_qpair_print_command notices, each paired with an ABORTED - SQ DELETION (00/08) completion, omitted]
TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.269919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.269934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.269947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.269963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.269976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.269991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:07:55.341 [2024-07-24 08:53:33.270226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 
08:53:33.270539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.341 [2024-07-24 08:53:33.270773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 [2024-07-24 08:53:33.270917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.341 [2024-07-24 08:53:33.270931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.341 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:55.341 [2024-07-24 08:53:33.270946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.342 [2024-07-24 08:53:33.270960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.270975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.342 [2024-07-24 08:53:33.270988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.271004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.342 [2024-07-24 08:53:33.271018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.271033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.342 [2024-07-24 08:53:33.271047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.271063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:55.342 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:55.342 [2024-07-24 08:53:33.271076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.271094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:07:55.342 [2024-07-24 08:53:33.271114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:55.342 [2024-07-24 08:53:33.271134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbb5f0 is same with the state(5) to be set 00:07:55.342 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.342 [2024-07-24 08:53:33.271222] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bbb5f0 was disconnected and freed. reset controller. 00:07:55.342 [2024-07-24 08:53:33.272414] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:55.342 task offset: 81792 on job bdev=Nvme0n1 fails 00:07:55.342 00:07:55.342 Latency(us) 00:07:55.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.342 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:55.342 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:55.342 Verification LBA range: start 0x0 length 0x400 00:07:55.342 Nvme0n1 : 0.40 1436.23 89.76 159.58 0.00 38969.27 6893.42 34564.17 00:07:55.342 =================================================================================================================== 00:07:55.342 Total : 1436.23 89.76 159.58 0.00 38969.27 6893.42 34564.17 00:07:55.342 [2024-07-24 08:53:33.274499] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.342 [2024-07-24 08:53:33.274547] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1789b50 (9): Bad file descriptor 00:07:55.342 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:55.342 08:53:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:55.342 [2024-07-24 08:53:33.279720] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
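Reading the burst above: SPDK prints NVMe completions as (SCT/SC), so (00/08) is status code type 0x0, generic command status, with status code 0x08, Command Aborted due to SQ Deletion. When the target tore down qid:1, every queued READ was failed back, and the bdev_nvme layer disconnected, freed the qpair, and reset the controller. Below is a minimal sketch of the RPC step traced at host_management.sh line 85, assuming a target is already serving nqn.2016-06.io.spdk:cnode0; the elided trace does not show whether this exact call or a neighbouring test step is what drops the connection.

# sketch only, not the test script itself; the rpc.py path and NQNs are the ones from this run
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# restrict the subsystem to an explicit host while bdevperf still has IO in flight
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0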
00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3664118 00:07:56.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3664118) - No such process 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:56.275 { 00:07:56.275 "params": { 00:07:56.275 "name": "Nvme$subsystem", 00:07:56.275 "trtype": "$TEST_TRANSPORT", 00:07:56.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:56.275 "adrfam": "ipv4", 00:07:56.275 "trsvcid": "$NVMF_PORT", 00:07:56.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:56.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:56.275 "hdgst": ${hdgst:-false}, 00:07:56.275 "ddgst": ${ddgst:-false} 00:07:56.275 }, 00:07:56.275 "method": "bdev_nvme_attach_controller" 00:07:56.275 } 00:07:56.275 EOF 00:07:56.275 )") 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:56.275 08:53:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:56.275 "params": { 00:07:56.275 "name": "Nvme0", 00:07:56.275 "trtype": "tcp", 00:07:56.275 "traddr": "10.0.0.2", 00:07:56.275 "adrfam": "ipv4", 00:07:56.275 "trsvcid": "4420", 00:07:56.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:56.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:56.275 "hdgst": false, 00:07:56.275 "ddgst": false 00:07:56.275 }, 00:07:56.275 "method": "bdev_nvme_attach_controller" 00:07:56.275 }' 00:07:56.275 [2024-07-24 08:53:34.324074] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:07:56.275 [2024-07-24 08:53:34.324181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3664284 ] 00:07:56.276 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.276 [2024-07-24 08:53:34.356338] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
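The gen_nvmf_target_json output traced above can be written out as a standalone file for clarity. The params object below is copied verbatim from the rendered JSON in the trace; the surrounding "subsystems"/"bdev" envelope is the usual SPDK JSON-config shape and is an assumption here, since the log only shows the inner object.

# hedged sketch of an equivalent standalone invocation
cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same flags as the run above: 64 outstanding IOs of 64 KiB each, verify workload, 1 second
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1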
00:07:56.276 [2024-07-24 08:53:34.385112] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:56.534 [2024-07-24 08:53:34.473064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:56.792 Running I/O for 1 seconds...
00:07:57.726
00:07:57.726 Latency(us)
00:07:57.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.726 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:57.726 Verification LBA range: start 0x0 length 0x400
00:07:57.726 Nvme0n1 : 1.03 1308.74 81.80 0.00 0.00 48190.83 8932.31 41748.86
00:07:57.726 ===================================================================================================================
00:07:57.726 Total : 1308.74 81.80 0.00 0.00 48190.83 8932.31 41748.86
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:57.984 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:57.984 rmmod nvme_tcp
00:07:57.984 rmmod nvme_fabrics
00:07:57.984 rmmod nvme_keyring
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3663960 ']'
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3663960
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3663960 ']'
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3663960
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3663960
00:07:58.243 08:53:36
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3663960' 00:07:58.243 killing process with pid 3663960 00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3663960 00:07:58.243 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3663960 00:07:58.243 [2024-07-24 08:53:36.354659] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.502 08:53:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:00.405 00:08:00.405 real 0m8.486s 00:08:00.405 user 0m19.002s 00:08:00.405 sys 0m2.630s 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 ************************************ 00:08:00.405 END TEST nvmf_host_management 00:08:00.405 ************************************ 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.405 ************************************ 00:08:00.405 START TEST nvmf_lvol 00:08:00.405 ************************************ 00:08:00.405 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:00.664 * Looking for test storage... 
00:08:00.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.665 08:53:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:02.567 08:53:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:02.567 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:02.567 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.567 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:02.568 Found net devices under 0000:09:00.0: cvl_0_0 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:02.568 Found net devices under 0000:09:00.1: cvl_0_1 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:02.568 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:02.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:02.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:08:02.826 00:08:02.826 --- 10.0.0.2 ping statistics --- 00:08:02.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.826 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:02.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:02.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:08:02.826 00:08:02.826 --- 10.0.0.1 ping statistics --- 00:08:02.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:02.826 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3666477 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3666477 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3666477 ']' 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.826 08:53:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:02.826 [2024-07-24 08:53:40.771189] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:08:02.826 [2024-07-24 08:53:40.771266] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.826 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.826 [2024-07-24 08:53:40.809250] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:02.826 [2024-07-24 08:53:40.841194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.826 [2024-07-24 08:53:40.933634] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.826 [2024-07-24 08:53:40.933695] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:02.826 [2024-07-24 08:53:40.933711] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.826 [2024-07-24 08:53:40.933724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.826 [2024-07-24 08:53:40.933743] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.826 [2024-07-24 08:53:40.933831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.826 [2024-07-24 08:53:40.933908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.826 [2024-07-24 08:53:40.933906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.084 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.341 [2024-07-24 08:53:41.299678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.341 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.599 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:03.599 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.857 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:03.857 08:53:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:04.114 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:04.372 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=772c952d-1633-49c2-8211-6c9cd31cf184 00:08:04.372 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 772c952d-1633-49c2-8211-6c9cd31cf184 lvol 20 00:08:04.630 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- target/nvmf_lvol.sh@32 -- # lvol=e9c387d6-2ae6-4fb9-940d-32deaeef8ff6 00:08:04.630 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:04.919 08:53:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e9c387d6-2ae6-4fb9-940d-32deaeef8ff6 00:08:05.177 08:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:05.438 [2024-07-24 08:53:43.389456] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:05.438 08:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.696 08:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3666790 00:08:05.696 08:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.696 08:53:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:05.696 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.630 08:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e9c387d6-2ae6-4fb9-940d-32deaeef8ff6 MY_SNAPSHOT 00:08:06.888 08:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=1f8dd8bc-8947-405a-b321-5bc0dde4d997 00:08:06.888 08:53:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e9c387d6-2ae6-4fb9-940d-32deaeef8ff6 30 00:08:07.455 08:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 1f8dd8bc-8947-405a-b321-5bc0dde4d997 MY_CLONE 00:08:07.455 08:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4cde7c9a-7f4b-4ad0-a80c-45132594eeb8 00:08:07.455 08:53:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4cde7c9a-7f4b-4ad0-a80c-45132594eeb8 00:08:08.389 08:53:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3666790 00:08:16.498 Initializing NVMe Controllers 00:08:16.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:16.498 Controller IO queue size 128, less than required. 00:08:16.498 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:16.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:16.498 Initialization complete. Launching workers. 
00:08:16.498 ========================================================
00:08:16.498 Latency(us)
00:08:16.498 Device Information : IOPS MiB/s Average min max
00:08:16.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10707.80 41.83 11955.65 1429.91 70216.01
00:08:16.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10647.40 41.59 12030.01 1873.53 77421.93
00:08:16.498 ========================================================
00:08:16.498 Total : 21355.20 83.42 11992.72 1429.91 77421.93
00:08:16.498
00:08:16.498 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:08:16.498 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e9c387d6-2ae6-4fb9-940d-32deaeef8ff6
00:08:16.498 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 772c952d-1633-49c2-8211-6c9cd31cf184
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:16.756 rmmod nvme_tcp
00:08:16.756 rmmod nvme_fabrics
00:08:16.756 rmmod nvme_keyring
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3666477 ']'
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3666477
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3666477 ']'
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3666477
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3666477
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:08:16.756 08:53:54
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3666477' 00:08:16.756 killing process with pid 3666477 00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3666477 00:08:16.756 08:53:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3666477 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.325 08:53:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:19.229 00:08:19.229 real 0m18.696s 00:08:19.229 user 1m3.484s 00:08:19.229 sys 0m5.749s 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:19.229 ************************************ 00:08:19.229 END TEST nvmf_lvol 00:08:19.229 ************************************ 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.229 ************************************ 00:08:19.229 START TEST nvmf_lvs_grow 00:08:19.229 ************************************ 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:19.229 * Looking for test storage... 
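
For orientation before the next test: the nvmf_lvs_grow run below first brings up a private NVMe/TCP test network via nvmftestinit. Condensed into plain commands, the plumbing traced further down amounts to the following sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones this run reports; this is a recap of the traced steps, not the full nvmf/common.sh logic, and it needs root):

    # Target-side NIC moves into its own namespace; the initiator NIC stays in
    # the root namespace, so NVMe/TCP traffic really crosses the link.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1, as traced below), which is why its 10.0.0.2:4420 listener is reachable from the host-side initiator.
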
00:08:19.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.229 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.230 08:53:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:19.230 08:53:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.230 08:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:21.128 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:08:21.129 Found 0000:09:00.0 (0x8086 - 0x159b) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:08:21.129 Found 0000:09:00.1 (0x8086 - 0x159b) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:21.129 
08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:08:21.129 Found net devices under 0000:09:00.0: cvl_0_0 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:08:21.129 Found net devices under 0000:09:00.1: cvl_0_1 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:21.129 08:53:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:21.129 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:21.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:21.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:08:21.388 00:08:21.388 --- 10.0.0.2 ping statistics --- 00:08:21.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.388 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:21.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:21.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:21.388 00:08:21.388 --- 10.0.0.1 ping statistics --- 00:08:21.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:21.388 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3670064 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3670064 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3670064 ']' 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.388 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.388 [2024-07-24 08:53:59.418067] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:08:21.388 [2024-07-24 08:53:59.418160] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.388 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.388 [2024-07-24 08:53:59.453635] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:21.388 [2024-07-24 08:53:59.479348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.646 [2024-07-24 08:53:59.567751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.646 [2024-07-24 08:53:59.567812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.646 [2024-07-24 08:53:59.567829] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.646 [2024-07-24 08:53:59.567843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.646 [2024-07-24 08:53:59.567854] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:21.646 [2024-07-24 08:53:59.567892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.646 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:21.646 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:21.646 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:21.647 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:21.647 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.647 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.647 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:21.905 [2024-07-24 08:53:59.931085] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:21.905 ************************************ 00:08:21.905 START TEST lvs_grow_clean 00:08:21.905 ************************************ 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:21.905 08:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:22.163 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:22.163 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:22.421 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:22.421 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:22.421 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:22.678 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:22.678 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:22.678 08:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 48418ade-5374-4cb2-aedf-479ed2ed378c lvol 150 00:08:23.244 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 00:08:23.244 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:23.244 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:23.244 [2024-07-24 08:54:01.293437] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:23.244 [2024-07-24 08:54:01.293548] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 
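
The lvs_grow_clean flow is easier to follow in one place before the bdevperf load starts. The sketch below condenses the RPCs traced around here; $SPDK and aio_file are assumed shorthands for the long workspace paths in the trace, and the steps are linearized (in the actual test the grow happens while bdevperf is writing):

    truncate -s 200M aio_file                                 # 200 MiB backing file
    $SPDK/scripts/rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$($SPDK/scripts/rpc.py bdev_lvol_create_lvstore \
            --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    $SPDK/scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150  # 150 MiB volume
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" |
        jq -r '.[0].total_data_clusters'                      # test expects 49
    truncate -s 400M aio_file                                 # grow the backing file
    $SPDK/scripts/rpc.py bdev_aio_rescan aio_bdev             # bdev picks up the new size
    $SPDK/scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"     # lvstore grows too
    $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" |
        jq -r '.[0].total_data_clusters'                      # test expects 99

The cluster arithmetic behind the assertions: 200 MiB at a 4 MiB cluster size is 50 raw clusters, and the run reports total_data_clusters=49, consistent with lvstore metadata consuming the difference; after growing to 400 MiB the count doubles to 99. The 150 MiB lvol rounds up to 38 such clusters (matching "num_allocated_clusters": 38 in the bdev dump further down), leaving 99 - 38 = 61 free, which is the free_clusters=61 captured later in the trace.
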
00:08:23.244 true 00:08:23.244 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:23.244 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:23.501 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:23.501 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:23.758 08:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 00:08:24.016 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:24.275 [2024-07-24 08:54:02.280432] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.275 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3670607 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3670607 /var/tmp/bdevperf.sock 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3670607 ']' 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:24.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.533 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:24.533 [2024-07-24 08:54:02.631465] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:08:24.533 [2024-07-24 08:54:02.631547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3670607 ] 00:08:24.791 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.792 [2024-07-24 08:54:02.665495] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:24.792 [2024-07-24 08:54:02.696037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.792 [2024-07-24 08:54:02.789020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.792 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.792 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:24.792 08:54:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:25.356 Nvme0n1 00:08:25.356 08:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:25.613 [ 00:08:25.613 { 00:08:25.613 "name": "Nvme0n1", 00:08:25.613 "aliases": [ 00:08:25.613 "1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1" 00:08:25.613 ], 00:08:25.613 "product_name": "NVMe disk", 00:08:25.613 "block_size": 4096, 00:08:25.613 "num_blocks": 38912, 00:08:25.613 "uuid": "1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1", 00:08:25.613 "assigned_rate_limits": { 00:08:25.613 "rw_ios_per_sec": 0, 00:08:25.613 "rw_mbytes_per_sec": 0, 00:08:25.613 "r_mbytes_per_sec": 0, 00:08:25.613 "w_mbytes_per_sec": 0 00:08:25.613 }, 00:08:25.613 "claimed": false, 00:08:25.613 "zoned": false, 00:08:25.613 "supported_io_types": { 00:08:25.613 "read": true, 00:08:25.613 "write": true, 00:08:25.613 "unmap": true, 00:08:25.613 "flush": true, 00:08:25.613 "reset": true, 00:08:25.613 "nvme_admin": true, 00:08:25.613 "nvme_io": true, 00:08:25.613 "nvme_io_md": false, 00:08:25.613 "write_zeroes": true, 00:08:25.613 "zcopy": false, 00:08:25.613 "get_zone_info": false, 00:08:25.613 "zone_management": false, 00:08:25.613 "zone_append": false, 00:08:25.613 "compare": true, 00:08:25.613 "compare_and_write": true, 00:08:25.613 "abort": true, 00:08:25.613 "seek_hole": false, 00:08:25.613 "seek_data": false, 00:08:25.613 "copy": true, 00:08:25.613 "nvme_iov_md": false 00:08:25.613 }, 00:08:25.613 "memory_domains": [ 00:08:25.613 { 00:08:25.613 "dma_device_id": "system", 00:08:25.613 "dma_device_type": 1 00:08:25.613 } 00:08:25.613 ], 00:08:25.613 "driver_specific": { 00:08:25.613 "nvme": [ 00:08:25.613 { 00:08:25.613 "trid": { 00:08:25.613 "trtype": "TCP", 00:08:25.613 "adrfam": "IPv4", 00:08:25.613 "traddr": "10.0.0.2", 00:08:25.613 "trsvcid": "4420", 00:08:25.613 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:25.613 }, 00:08:25.613 "ctrlr_data": { 00:08:25.613 "cntlid": 1, 00:08:25.613 "vendor_id": "0x8086", 00:08:25.613 "model_number": "SPDK bdev Controller", 00:08:25.613 "serial_number": "SPDK0", 00:08:25.613 "firmware_revision": "24.09", 00:08:25.613 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:08:25.613 "oacs": { 00:08:25.613 "security": 0, 00:08:25.613 "format": 0, 00:08:25.613 "firmware": 0, 00:08:25.613 "ns_manage": 0 00:08:25.613 }, 00:08:25.613 "multi_ctrlr": true, 00:08:25.613 "ana_reporting": false 00:08:25.613 }, 00:08:25.613 "vs": { 00:08:25.613 "nvme_version": "1.3" 00:08:25.613 }, 00:08:25.613 "ns_data": { 00:08:25.613 "id": 1, 00:08:25.613 "can_share": true 00:08:25.613 } 00:08:25.613 } 00:08:25.613 ], 00:08:25.613 "mp_policy": "active_passive" 00:08:25.613 } 00:08:25.613 } 00:08:25.613 ] 00:08:25.613 08:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3670745 00:08:25.613 08:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:25.613 08:54:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:25.613 Running I/O for 10 seconds... 00:08:26.546 Latency(us) 00:08:26.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.546 Nvme0n1 : 1.00 12700.00 49.61 0.00 0.00 0.00 0.00 0.00 00:08:26.546 =================================================================================================================== 00:08:26.546 Total : 12700.00 49.61 0.00 0.00 0.00 0.00 0.00 00:08:26.546 00:08:27.477 08:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:27.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.735 Nvme0n1 : 2.00 12827.00 50.11 0.00 0.00 0.00 0.00 0.00 00:08:27.735 =================================================================================================================== 00:08:27.735 Total : 12827.00 50.11 0.00 0.00 0.00 0.00 0.00 00:08:27.735 00:08:27.735 true 00:08:27.735 08:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:27.735 08:54:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:27.994 08:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:27.994 08:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:27.994 08:54:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3670745 00:08:28.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.582 Nvme0n1 : 3.00 12869.33 50.27 0.00 0.00 0.00 0.00 0.00 00:08:28.582 =================================================================================================================== 00:08:28.582 Total : 12869.33 50.27 0.00 0.00 0.00 0.00 0.00 00:08:28.582 00:08:29.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.525 Nvme0n1 : 4.00 12954.00 50.60 0.00 0.00 0.00 0.00 0.00 00:08:29.525 
=================================================================================================================== 00:08:29.525 Total : 12954.00 50.60 0.00 0.00 0.00 0.00 0.00 00:08:29.525 00:08:30.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.900 Nvme0n1 : 5.00 12992.20 50.75 0.00 0.00 0.00 0.00 0.00 00:08:30.900 =================================================================================================================== 00:08:30.900 Total : 12992.20 50.75 0.00 0.00 0.00 0.00 0.00 00:08:30.900 00:08:31.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.836 Nvme0n1 : 6.00 13059.83 51.01 0.00 0.00 0.00 0.00 0.00 00:08:31.836 =================================================================================================================== 00:08:31.836 Total : 13059.83 51.01 0.00 0.00 0.00 0.00 0.00 00:08:31.836 00:08:32.775 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.775 Nvme0n1 : 7.00 13099.14 51.17 0.00 0.00 0.00 0.00 0.00 00:08:32.775 =================================================================================================================== 00:08:32.775 Total : 13099.14 51.17 0.00 0.00 0.00 0.00 0.00 00:08:32.775 00:08:33.712 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.712 Nvme0n1 : 8.00 13144.50 51.35 0.00 0.00 0.00 0.00 0.00 00:08:33.712 =================================================================================================================== 00:08:33.712 Total : 13144.50 51.35 0.00 0.00 0.00 0.00 0.00 00:08:33.712 00:08:34.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.650 Nvme0n1 : 9.00 13179.78 51.48 0.00 0.00 0.00 0.00 0.00 00:08:34.651 =================================================================================================================== 00:08:34.651 Total : 13179.78 51.48 0.00 0.00 0.00 0.00 0.00 00:08:34.651 00:08:35.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.589 Nvme0n1 : 10.00 13220.70 51.64 0.00 0.00 0.00 0.00 0.00 00:08:35.589 =================================================================================================================== 00:08:35.589 Total : 13220.70 51.64 0.00 0.00 0.00 0.00 0.00 00:08:35.589 00:08:35.589 00:08:35.589 Latency(us) 00:08:35.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.589 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.589 Nvme0n1 : 10.01 13218.61 51.64 0.00 0.00 9676.55 5558.42 28350.39 00:08:35.589 =================================================================================================================== 00:08:35.589 Total : 13218.61 51.64 0.00 0.00 9676.55 5558.42 28350.39 00:08:35.589 0 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3670607 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3670607 ']' 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3670607 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:35.589 08:54:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3670607 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3670607' 00:08:35.589 killing process with pid 3670607 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3670607 00:08:35.589 Received shutdown signal, test time was about 10.000000 seconds 00:08:35.589 00:08:35.589 Latency(us) 00:08:35.589 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.589 =================================================================================================================== 00:08:35.589 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:35.589 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3670607 00:08:35.847 08:54:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:36.104 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:36.363 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:36.363 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:36.623 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:36.623 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:36.623 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:36.883 [2024-07-24 08:54:14.893886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.883 08:54:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:36.883 08:54:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:37.142 request: 00:08:37.142 { 00:08:37.142 "uuid": "48418ade-5374-4cb2-aedf-479ed2ed378c", 00:08:37.142 "method": "bdev_lvol_get_lvstores", 00:08:37.142 "req_id": 1 00:08:37.142 } 00:08:37.142 Got JSON-RPC error response 00:08:37.142 response: 00:08:37.142 { 00:08:37.142 "code": -19, 00:08:37.142 "message": "No such device" 00:08:37.142 } 00:08:37.142 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:37.142 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:37.142 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:37.143 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:37.143 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:37.401 aio_bdev 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:37.401 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:37.661 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 -t 2000 00:08:37.920 [ 00:08:37.920 { 00:08:37.920 "name": "1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1", 00:08:37.920 "aliases": [ 00:08:37.920 "lvs/lvol" 00:08:37.920 ], 00:08:37.920 "product_name": "Logical Volume", 00:08:37.920 "block_size": 4096, 00:08:37.920 "num_blocks": 38912, 00:08:37.920 "uuid": "1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1", 00:08:37.920 "assigned_rate_limits": { 00:08:37.920 "rw_ios_per_sec": 0, 00:08:37.920 "rw_mbytes_per_sec": 0, 00:08:37.920 "r_mbytes_per_sec": 0, 00:08:37.920 "w_mbytes_per_sec": 0 00:08:37.920 }, 00:08:37.920 "claimed": false, 00:08:37.920 "zoned": false, 00:08:37.920 "supported_io_types": { 00:08:37.920 "read": true, 00:08:37.920 "write": true, 00:08:37.920 "unmap": true, 00:08:37.920 "flush": false, 00:08:37.920 "reset": true, 00:08:37.920 "nvme_admin": false, 00:08:37.920 "nvme_io": false, 00:08:37.920 "nvme_io_md": false, 00:08:37.920 "write_zeroes": true, 00:08:37.920 "zcopy": false, 00:08:37.920 "get_zone_info": false, 00:08:37.920 "zone_management": false, 00:08:37.920 "zone_append": false, 00:08:37.920 "compare": false, 00:08:37.920 "compare_and_write": false, 00:08:37.920 "abort": false, 00:08:37.920 "seek_hole": true, 00:08:37.920 "seek_data": true, 00:08:37.920 "copy": false, 00:08:37.921 "nvme_iov_md": false 00:08:37.921 }, 00:08:37.921 "driver_specific": { 00:08:37.921 "lvol": { 00:08:37.921 "lvol_store_uuid": "48418ade-5374-4cb2-aedf-479ed2ed378c", 00:08:37.921 "base_bdev": "aio_bdev", 00:08:37.921 "thin_provision": false, 00:08:37.921 "num_allocated_clusters": 38, 00:08:37.921 "snapshot": false, 00:08:37.921 "clone": false, 00:08:37.921 "esnap_clone": false 00:08:37.921 } 00:08:37.921 } 00:08:37.921 } 00:08:37.921 ] 00:08:37.921 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:37.921 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:37.921 08:54:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:38.180 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:38.180 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:38.180 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:38.439 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:38.439 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1eebb6b1-ec84-4a73-8cd0-1cf460e6cfd1 00:08:38.698 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48418ade-5374-4cb2-aedf-479ed2ed378c 00:08:38.957 08:54:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.215 00:08:39.215 real 0m17.182s 00:08:39.215 user 0m15.821s 00:08:39.215 sys 0m2.239s 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:39.215 ************************************ 00:08:39.215 END TEST lvs_grow_clean 00:08:39.215 ************************************ 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.215 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:39.216 ************************************ 00:08:39.216 START TEST lvs_grow_dirty 00:08:39.216 ************************************ 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:39.216 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.474 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:39.474 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:39.732 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:39.732 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:39.732 08:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:39.991 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:39.991 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:39.991 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe lvol 150 00:08:40.251 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:40.251 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.251 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:40.511 [2024-07-24 08:54:18.523488] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:40.511 [2024-07-24 08:54:18.523583] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:40.511 true 00:08:40.511 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:40.511 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:40.770 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:40.770 08:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.029 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:41.288 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:41.548 [2024-07-24 08:54:19.534588] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:41.548 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3673175 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3673175 /var/tmp/bdevperf.sock 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3673175 ']' 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:41.807 08:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:41.807 [2024-07-24 08:54:19.836408] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:08:41.807 [2024-07-24 08:54:19.836482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3673175 ] 00:08:41.807 EAL: No free 2048 kB hugepages reported on node 1 00:08:41.807 [2024-07-24 08:54:19.867660] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
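For readers following the trace: the harness launches bdevperf in wait mode against the freshly exported subsystem. A minimal sketch of the equivalent invocation, with SPDK_ROOT standing in for the Jenkins workspace path shown in the log:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # placeholder; adjust for your checkout
  # -z makes bdevperf idle until a perform_tests RPC arrives on its socket
  "$SPDK_ROOT"/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!

The flags mirror the trace above: core mask 0x2, 4096-byte I/Os, queue depth 128, a 10-second randwrite workload with per-second (-S 1) status output.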
00:08:41.807 [2024-07-24 08:54:19.899002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.065 [2024-07-24 08:54:19.997813] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.065 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:42.065 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:42.065 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.633 Nvme0n1 00:08:42.633 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:42.891 [ 00:08:42.891 { 00:08:42.891 "name": "Nvme0n1", 00:08:42.891 "aliases": [ 00:08:42.891 "265b0e2d-edf1-44ba-9287-1466f5c93e7e" 00:08:42.891 ], 00:08:42.891 "product_name": "NVMe disk", 00:08:42.891 "block_size": 4096, 00:08:42.891 "num_blocks": 38912, 00:08:42.891 "uuid": "265b0e2d-edf1-44ba-9287-1466f5c93e7e", 00:08:42.891 "assigned_rate_limits": { 00:08:42.891 "rw_ios_per_sec": 0, 00:08:42.891 "rw_mbytes_per_sec": 0, 00:08:42.891 "r_mbytes_per_sec": 0, 00:08:42.891 "w_mbytes_per_sec": 0 00:08:42.891 }, 00:08:42.891 "claimed": false, 00:08:42.891 "zoned": false, 00:08:42.891 "supported_io_types": { 00:08:42.891 "read": true, 00:08:42.891 "write": true, 00:08:42.891 "unmap": true, 00:08:42.891 "flush": true, 00:08:42.891 "reset": true, 00:08:42.891 "nvme_admin": true, 00:08:42.891 "nvme_io": true, 00:08:42.891 "nvme_io_md": false, 00:08:42.891 "write_zeroes": true, 00:08:42.891 "zcopy": false, 00:08:42.891 "get_zone_info": false, 00:08:42.891 "zone_management": false, 00:08:42.891 "zone_append": false, 00:08:42.891 "compare": true, 00:08:42.891 "compare_and_write": true, 00:08:42.891 "abort": true, 00:08:42.891 "seek_hole": false, 00:08:42.891 "seek_data": false, 00:08:42.891 "copy": true, 00:08:42.891 "nvme_iov_md": false 00:08:42.891 }, 00:08:42.891 "memory_domains": [ 00:08:42.891 { 00:08:42.891 "dma_device_id": "system", 00:08:42.891 "dma_device_type": 1 00:08:42.891 } 00:08:42.891 ], 00:08:42.891 "driver_specific": { 00:08:42.891 "nvme": [ 00:08:42.891 { 00:08:42.891 "trid": { 00:08:42.891 "trtype": "TCP", 00:08:42.891 "adrfam": "IPv4", 00:08:42.891 "traddr": "10.0.0.2", 00:08:42.891 "trsvcid": "4420", 00:08:42.891 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:42.891 }, 00:08:42.891 "ctrlr_data": { 00:08:42.891 "cntlid": 1, 00:08:42.891 "vendor_id": "0x8086", 00:08:42.891 "model_number": "SPDK bdev Controller", 00:08:42.891 "serial_number": "SPDK0", 00:08:42.891 "firmware_revision": "24.09", 00:08:42.891 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:42.891 "oacs": { 00:08:42.891 "security": 0, 00:08:42.891 "format": 0, 00:08:42.891 "firmware": 0, 00:08:42.891 "ns_manage": 0 00:08:42.891 }, 00:08:42.891 "multi_ctrlr": true, 00:08:42.891 "ana_reporting": false 00:08:42.891 }, 00:08:42.891 "vs": { 00:08:42.891 "nvme_version": "1.3" 00:08:42.891 }, 00:08:42.891 "ns_data": { 00:08:42.891 "id": 1, 00:08:42.891 "can_share": true 00:08:42.891 } 00:08:42.891 } 00:08:42.891 ], 00:08:42.891 "mp_policy": "active_passive" 00:08:42.891 } 00:08:42.891 } 00:08:42.891 ] 00:08:42.891 
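The bdev_get_bdevs output above confirms that Nvme0n1 is an NVMe disk backed by the exported lvol (same UUID, 38912 blocks of 4096 bytes). A minimal sketch of the attach-and-verify step it follows, again with SPDK_ROOT as a placeholder for the workspace path:

  # Attach the TCP-exported namespace as bdev "Nvme0" inside bdevperf
  "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # Confirm the namespace surfaced as Nvme0n1 (3000 ms timeout, as in the trace)
  "$SPDK_ROOT"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000

The run then starts via bdevperf.py perform_tests on the next trace line while, after a sleep 2, the harness grows the lvstore underneath the running workload with bdev_lvol_grow_lvstore -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe.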
08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3673311 00:08:42.891 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:42.891 08:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:42.891 Running I/O for 10 seconds... 00:08:43.827 Latency(us) 00:08:43.827 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.827 Nvme0n1 : 1.00 12574.00 49.12 0.00 0.00 0.00 0.00 0.00 00:08:43.827 =================================================================================================================== 00:08:43.827 Total : 12574.00 49.12 0.00 0.00 0.00 0.00 0.00 00:08:43.827 00:08:44.762 08:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:45.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.019 Nvme0n1 : 2.00 12891.00 50.36 0.00 0.00 0.00 0.00 0.00 00:08:45.019 =================================================================================================================== 00:08:45.019 Total : 12891.00 50.36 0.00 0.00 0.00 0.00 0.00 00:08:45.019 00:08:45.019 true 00:08:45.019 08:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:45.019 08:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:45.278 08:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.278 08:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.278 08:54:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3673311 00:08:45.847 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.847 Nvme0n1 : 3.00 12912.00 50.44 0.00 0.00 0.00 0.00 0.00 00:08:45.847 =================================================================================================================== 00:08:45.847 Total : 12912.00 50.44 0.00 0.00 0.00 0.00 0.00 00:08:45.847 00:08:46.819 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.819 Nvme0n1 : 4.00 12986.00 50.73 0.00 0.00 0.00 0.00 0.00 00:08:46.819 =================================================================================================================== 00:08:46.819 Total : 12986.00 50.73 0.00 0.00 0.00 0.00 0.00 00:08:46.819 00:08:48.209 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.209 Nvme0n1 : 5.00 13056.00 51.00 0.00 0.00 0.00 0.00 0.00 00:08:48.209 =================================================================================================================== 00:08:48.209 Total : 13056.00 51.00 0.00 0.00 0.00 0.00 0.00 00:08:48.209 00:08:49.150 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:49.150 Nvme0n1 : 6.00 13113.17 51.22 0.00 0.00 0.00 0.00 0.00 00:08:49.150 =================================================================================================================== 00:08:49.150 Total : 13113.17 51.22 0.00 0.00 0.00 0.00 0.00 00:08:49.150 00:08:50.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.088 Nvme0n1 : 7.00 13172.00 51.45 0.00 0.00 0.00 0.00 0.00 00:08:50.089 =================================================================================================================== 00:08:50.089 Total : 13172.00 51.45 0.00 0.00 0.00 0.00 0.00 00:08:50.089 00:08:51.028 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.028 Nvme0n1 : 8.00 13192.38 51.53 0.00 0.00 0.00 0.00 0.00 00:08:51.028 =================================================================================================================== 00:08:51.028 Total : 13192.38 51.53 0.00 0.00 0.00 0.00 0.00 00:08:51.028 00:08:51.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.965 Nvme0n1 : 9.00 13236.44 51.70 0.00 0.00 0.00 0.00 0.00 00:08:51.965 =================================================================================================================== 00:08:51.965 Total : 13236.44 51.70 0.00 0.00 0.00 0.00 0.00 00:08:51.965 00:08:52.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.901 Nvme0n1 : 10.00 13246.30 51.74 0.00 0.00 0.00 0.00 0.00 00:08:52.901 =================================================================================================================== 00:08:52.901 Total : 13246.30 51.74 0.00 0.00 0.00 0.00 0.00 00:08:52.901 00:08:52.901 00:08:52.901 Latency(us) 00:08:52.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.901 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.901 Nvme0n1 : 10.01 13249.58 51.76 0.00 0.00 9655.55 3131.16 20680.25 00:08:52.901 =================================================================================================================== 00:08:52.901 Total : 13249.58 51.76 0.00 0.00 9655.55 3131.16 20680.25 00:08:52.901 0 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3673175 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3673175 ']' 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3673175 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3673175 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3673175' 00:08:52.901 killing process with pid 3673175 00:08:52.901 08:54:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3673175 00:08:52.901 Received shutdown signal, test time was about 10.000000 seconds 00:08:52.901 00:08:52.901 Latency(us) 00:08:52.901 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.901 =================================================================================================================== 00:08:52.901 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:52.901 08:54:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3673175 00:08:53.158 08:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.416 08:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:53.984 08:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:53.984 08:54:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3670064 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3670064 00:08:53.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3670064 Killed "${NVMF_APP[@]}" "$@" 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3674649 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3674649 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3674649 ']' 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.984 08:54:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:53.984 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.244 [2024-07-24 08:54:32.122419] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:08:54.244 [2024-07-24 08:54:32.122509] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.244 EAL: No free 2048 kB hugepages reported on node 1 00:08:54.244 [2024-07-24 08:54:32.170468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:54.244 [2024-07-24 08:54:32.202477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.244 [2024-07-24 08:54:32.296571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.244 [2024-07-24 08:54:32.296633] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.244 [2024-07-24 08:54:32.296650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.244 [2024-07-24 08:54:32.296664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.244 [2024-07-24 08:54:32.296676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
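This is the "dirty" half of the test: the previous nvmf_tgt was killed with kill -9 while the lvstore was still open, and a fresh target (pid 3674649) is started in its place. When the AIO bdev is re-created on the next trace line, the blobstore notices the unclean shutdown and replays recovery (the "Performing recovery on blobstore" / "Recover: blob 0x0" notices). A sketch of that re-registration step, with SPDK_ROOT standing in for the workspace path:

  # Re-register the same backing file; loading the lvstore now triggers
  # blobstore recovery because the previous target never closed it cleanly
  "$SPDK_ROOT"/scripts/rpc.py bdev_aio_create \
      "$SPDK_ROOT"/test/nvmf/target/aio_bdev aio_bdev 4096

The free_clusters == 61 and total_data_clusters == 99 checks that follow verify that the grow performed before the kill survived recovery.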
00:08:54.244 [2024-07-24 08:54:32.296707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.501 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:54.501 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:54.501 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:54.501 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:54.502 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.502 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.502 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.759 [2024-07-24 08:54:32.721050] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.759 [2024-07-24 08:54:32.721210] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.759 [2024-07-24 08:54:32.721263] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:54.759 08:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:55.018 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 265b0e2d-edf1-44ba-9287-1466f5c93e7e -t 2000 00:08:55.278 [ 00:08:55.278 { 00:08:55.278 "name": "265b0e2d-edf1-44ba-9287-1466f5c93e7e", 00:08:55.278 "aliases": [ 00:08:55.278 "lvs/lvol" 00:08:55.278 ], 00:08:55.278 "product_name": "Logical Volume", 00:08:55.278 "block_size": 4096, 00:08:55.278 "num_blocks": 38912, 00:08:55.278 "uuid": "265b0e2d-edf1-44ba-9287-1466f5c93e7e", 00:08:55.278 "assigned_rate_limits": { 00:08:55.278 "rw_ios_per_sec": 0, 00:08:55.278 "rw_mbytes_per_sec": 0, 00:08:55.278 "r_mbytes_per_sec": 0, 00:08:55.278 "w_mbytes_per_sec": 0 00:08:55.278 }, 00:08:55.278 "claimed": false, 00:08:55.278 "zoned": false, 
00:08:55.278 "supported_io_types": { 00:08:55.278 "read": true, 00:08:55.278 "write": true, 00:08:55.278 "unmap": true, 00:08:55.278 "flush": false, 00:08:55.278 "reset": true, 00:08:55.278 "nvme_admin": false, 00:08:55.278 "nvme_io": false, 00:08:55.278 "nvme_io_md": false, 00:08:55.279 "write_zeroes": true, 00:08:55.279 "zcopy": false, 00:08:55.279 "get_zone_info": false, 00:08:55.279 "zone_management": false, 00:08:55.279 "zone_append": false, 00:08:55.279 "compare": false, 00:08:55.279 "compare_and_write": false, 00:08:55.279 "abort": false, 00:08:55.279 "seek_hole": true, 00:08:55.279 "seek_data": true, 00:08:55.279 "copy": false, 00:08:55.279 "nvme_iov_md": false 00:08:55.279 }, 00:08:55.279 "driver_specific": { 00:08:55.279 "lvol": { 00:08:55.279 "lvol_store_uuid": "87ba00d5-8fac-4714-91ca-a26a2eed8dbe", 00:08:55.279 "base_bdev": "aio_bdev", 00:08:55.279 "thin_provision": false, 00:08:55.279 "num_allocated_clusters": 38, 00:08:55.279 "snapshot": false, 00:08:55.279 "clone": false, 00:08:55.279 "esnap_clone": false 00:08:55.279 } 00:08:55.279 } 00:08:55.279 } 00:08:55.279 ] 00:08:55.279 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:55.279 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:55.279 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:55.536 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:55.536 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:55.536 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:55.795 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:55.795 08:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:56.053 [2024-07-24 08:54:34.062245] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:56.053 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:56.312 request: 00:08:56.312 { 00:08:56.312 "uuid": "87ba00d5-8fac-4714-91ca-a26a2eed8dbe", 00:08:56.312 "method": "bdev_lvol_get_lvstores", 00:08:56.312 "req_id": 1 00:08:56.312 } 00:08:56.312 Got JSON-RPC error response 00:08:56.312 response: 00:08:56.312 { 00:08:56.312 "code": -19, 00:08:56.312 "message": "No such device" 00:08:56.312 } 00:08:56.312 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:56.312 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:56.312 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:56.312 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:56.312 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.570 aio_bdev 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:56.570 08:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.829 08:54:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 265b0e2d-edf1-44ba-9287-1466f5c93e7e -t 2000 00:08:57.087 [ 00:08:57.087 { 00:08:57.087 "name": "265b0e2d-edf1-44ba-9287-1466f5c93e7e", 00:08:57.087 "aliases": [ 00:08:57.087 "lvs/lvol" 00:08:57.087 ], 00:08:57.087 "product_name": "Logical Volume", 00:08:57.087 "block_size": 4096, 00:08:57.087 "num_blocks": 38912, 00:08:57.087 "uuid": "265b0e2d-edf1-44ba-9287-1466f5c93e7e", 00:08:57.087 "assigned_rate_limits": { 00:08:57.087 "rw_ios_per_sec": 0, 00:08:57.087 "rw_mbytes_per_sec": 0, 00:08:57.087 "r_mbytes_per_sec": 0, 00:08:57.087 "w_mbytes_per_sec": 0 00:08:57.087 }, 00:08:57.087 "claimed": false, 00:08:57.087 "zoned": false, 00:08:57.087 "supported_io_types": { 00:08:57.087 "read": true, 00:08:57.087 "write": true, 00:08:57.087 "unmap": true, 00:08:57.087 "flush": false, 00:08:57.087 "reset": true, 00:08:57.087 "nvme_admin": false, 00:08:57.088 "nvme_io": false, 00:08:57.088 "nvme_io_md": false, 00:08:57.088 "write_zeroes": true, 00:08:57.088 "zcopy": false, 00:08:57.088 "get_zone_info": false, 00:08:57.088 "zone_management": false, 00:08:57.088 "zone_append": false, 00:08:57.088 "compare": false, 00:08:57.088 "compare_and_write": false, 00:08:57.088 "abort": false, 00:08:57.088 "seek_hole": true, 00:08:57.088 "seek_data": true, 00:08:57.088 "copy": false, 00:08:57.088 "nvme_iov_md": false 00:08:57.088 }, 00:08:57.088 "driver_specific": { 00:08:57.088 "lvol": { 00:08:57.088 "lvol_store_uuid": "87ba00d5-8fac-4714-91ca-a26a2eed8dbe", 00:08:57.088 "base_bdev": "aio_bdev", 00:08:57.088 "thin_provision": false, 00:08:57.088 "num_allocated_clusters": 38, 00:08:57.088 "snapshot": false, 00:08:57.088 "clone": false, 00:08:57.088 "esnap_clone": false 00:08:57.088 } 00:08:57.088 } 00:08:57.088 } 00:08:57.088 ] 00:08:57.088 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:57.088 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:57.088 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:57.347 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:57.347 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 00:08:57.347 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:57.610 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:57.610 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 265b0e2d-edf1-44ba-9287-1466f5c93e7e 00:08:57.869 08:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe 
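Teardown walks the creation steps in reverse. A condensed sketch of the sequence traced around this point (UUIDs copied from the log; SPDK_ROOT remains a placeholder):

  RPC="$SPDK_ROOT/scripts/rpc.py"
  "$RPC" bdev_lvol_delete 265b0e2d-edf1-44ba-9287-1466f5c93e7e          # the lvol
  "$RPC" bdev_lvol_delete_lvstore -u 87ba00d5-8fac-4714-91ca-a26a2eed8dbe
  "$RPC" bdev_aio_delete aio_bdev                                       # drop the backing bdev
  rm -f "$SPDK_ROOT"/test/nvmf/target/aio_bdev                          # remove the truncated backing file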
00:08:58.128 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:58.387 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.387 00:08:58.387 real 0m19.208s 00:08:58.387 user 0m43.804s 00:08:58.387 sys 0m6.561s 00:08:58.387 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:58.387 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:58.387 ************************************ 00:08:58.387 END TEST lvs_grow_dirty 00:08:58.387 ************************************ 00:08:58.387 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:58.387 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:58.388 nvmf_trace.0 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:58.388 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:58.388 rmmod nvme_tcp 00:08:58.388 rmmod nvme_fabrics 00:08:58.648 rmmod nvme_keyring 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3674649 ']' 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3674649 00:08:58.648 
08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3674649 ']' 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3674649 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3674649 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3674649' 00:08:58.648 killing process with pid 3674649 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3674649 00:08:58.648 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3674649 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.907 08:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:00.817 00:09:00.817 real 0m41.610s 00:09:00.817 user 1m5.367s 00:09:00.817 sys 0m10.635s 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.817 ************************************ 00:09:00.817 END TEST nvmf_lvs_grow 00:09:00.817 ************************************ 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.817 ************************************ 00:09:00.817 START TEST nvmf_bdev_io_wait 00:09:00.817 ************************************ 00:09:00.817 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:01.076 * Looking for test storage... 00:09:01.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:01.076 
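A note on the PATH values above: the triplicated /opt/go, /opt/protoc and /opt/golangci entries are expected, not corruption. paths/export.sh prepends its toolchain directories on every source, and the harness sources it once per test script, so duplicates accumulate harmlessly. If idempotent sourcing were wanted, a standard order-preserving dedup could follow the exports; a minimal sketch, not part of the SPDK scripts:
  # keep only the first occurrence of each colon-separated PATH entry
  PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
  export PATH="${PATH%:}"   # drop the trailing colon added by ORS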
08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:01.076 08:54:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:02.980 08:54:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:02.980 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:02.980 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:02.980 Found net devices under 0000:09:00.0: cvl_0_0 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:02.980 Found net devices under 0000:09:00.1: cvl_0_1 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:02.980 08:54:40 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:02.980 08:54:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:02.980 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:02.980 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:02.980 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:02.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:02.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:09:02.980 00:09:02.981 --- 10.0.0.2 ping statistics --- 00:09:02.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.981 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:02.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:02.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:09:02.981 00:09:02.981 --- 10.0.0.1 ping statistics --- 00:09:02.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:02.981 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3677172 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3677172 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3677172 ']' 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:02.981 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.240 [2024-07-24 08:54:41.128969] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
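For readers following the nvmf_tcp_init trace above: the harness splits the two ports of the ice NIC so initiator and target talk over a real link on one host, with the target port hidden in a private network namespace. Condensed from the commands visible in the log (cleanup and error handling omitted):
  ip netns add cvl_0_0_ns_spdk                        # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                  # verify reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
The target application is then launched inside that namespace, which is why NVMF_APP is rewritten with NVMF_TARGET_NS_CMD prepended before nvmfappstart runs nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc.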
00:09:03.240 [2024-07-24 08:54:41.129057] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.240 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.240 [2024-07-24 08:54:41.177130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:03.240 [2024-07-24 08:54:41.207261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:03.240 [2024-07-24 08:54:41.305604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.240 [2024-07-24 08:54:41.305664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:03.240 [2024-07-24 08:54:41.305680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.240 [2024-07-24 08:54:41.305693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.240 [2024-07-24 08:54:41.305705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.240 [2024-07-24 08:54:41.305762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.240 [2024-07-24 08:54:41.305812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:03.240 [2024-07-24 08:54:41.305927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:03.240 [2024-07-24 08:54:41.305930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd 
nvmf_create_transport -t tcp -o -u 8192 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 [2024-07-24 08:54:41.480273] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 Malloc0 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:03.499 [2024-07-24 08:54:41.538868] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3677306 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3677309 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.499 08:54:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.499 { 00:09:03.499 "params": { 00:09:03.499 "name": "Nvme$subsystem", 00:09:03.499 "trtype": "$TEST_TRANSPORT", 00:09:03.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.499 "adrfam": "ipv4", 00:09:03.499 "trsvcid": "$NVMF_PORT", 00:09:03.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.499 "hdgst": ${hdgst:-false}, 00:09:03.499 "ddgst": ${ddgst:-false} 00:09:03.499 }, 00:09:03.499 "method": "bdev_nvme_attach_controller" 00:09:03.499 } 00:09:03.499 EOF 00:09:03.499 )") 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3677312 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.499 { 00:09:03.499 "params": { 00:09:03.499 "name": "Nvme$subsystem", 00:09:03.499 "trtype": "$TEST_TRANSPORT", 00:09:03.499 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.499 "adrfam": "ipv4", 00:09:03.499 "trsvcid": "$NVMF_PORT", 00:09:03.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.499 "hdgst": ${hdgst:-false}, 00:09:03.499 "ddgst": ${ddgst:-false} 00:09:03.499 }, 00:09:03.499 "method": "bdev_nvme_attach_controller" 00:09:03.499 } 00:09:03.499 EOF 00:09:03.499 )") 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3677316 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.499 { 00:09:03.499 "params": { 00:09:03.499 "name": "Nvme$subsystem", 00:09:03.499 "trtype": "$TEST_TRANSPORT", 00:09:03.499 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:09:03.499 "adrfam": "ipv4", 00:09:03.499 "trsvcid": "$NVMF_PORT", 00:09:03.499 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.499 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.499 "hdgst": ${hdgst:-false}, 00:09:03.499 "ddgst": ${ddgst:-false} 00:09:03.499 }, 00:09:03.499 "method": "bdev_nvme_attach_controller" 00:09:03.499 } 00:09:03.499 EOF 00:09:03.499 )") 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:03.499 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:03.499 { 00:09:03.499 "params": { 00:09:03.500 "name": "Nvme$subsystem", 00:09:03.500 "trtype": "$TEST_TRANSPORT", 00:09:03.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:03.500 "adrfam": "ipv4", 00:09:03.500 "trsvcid": "$NVMF_PORT", 00:09:03.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:03.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:03.500 "hdgst": ${hdgst:-false}, 00:09:03.500 "ddgst": ${ddgst:-false} 00:09:03.500 }, 00:09:03.500 "method": "bdev_nvme_attach_controller" 00:09:03.500 } 00:09:03.500 EOF 00:09:03.500 )") 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3677306 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.500 "params": { 00:09:03.500 "name": "Nvme1", 00:09:03.500 "trtype": "tcp", 00:09:03.500 "traddr": "10.0.0.2", 00:09:03.500 "adrfam": "ipv4", 00:09:03.500 "trsvcid": "4420", 00:09:03.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.500 "hdgst": false, 00:09:03.500 "ddgst": false 00:09:03.500 }, 00:09:03.500 "method": "bdev_nvme_attach_controller" 00:09:03.500 }' 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.500 "params": { 00:09:03.500 "name": "Nvme1", 00:09:03.500 "trtype": "tcp", 00:09:03.500 "traddr": "10.0.0.2", 00:09:03.500 "adrfam": "ipv4", 00:09:03.500 "trsvcid": "4420", 00:09:03.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.500 "hdgst": false, 00:09:03.500 "ddgst": false 00:09:03.500 }, 00:09:03.500 "method": "bdev_nvme_attach_controller" 00:09:03.500 }' 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.500 "params": { 00:09:03.500 "name": "Nvme1", 00:09:03.500 "trtype": "tcp", 00:09:03.500 "traddr": "10.0.0.2", 00:09:03.500 "adrfam": "ipv4", 00:09:03.500 "trsvcid": "4420", 00:09:03.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.500 "hdgst": false, 00:09:03.500 "ddgst": false 00:09:03.500 }, 00:09:03.500 "method": "bdev_nvme_attach_controller" 00:09:03.500 }' 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:03.500 08:54:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:03.500 "params": { 00:09:03.500 "name": "Nvme1", 00:09:03.500 "trtype": "tcp", 00:09:03.500 "traddr": "10.0.0.2", 00:09:03.500 "adrfam": "ipv4", 00:09:03.500 "trsvcid": "4420", 00:09:03.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:03.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:03.500 "hdgst": false, 00:09:03.500 "ddgst": false 00:09:03.500 }, 00:09:03.500 "method": "bdev_nvme_attach_controller" 00:09:03.500 }' 00:09:03.500 [2024-07-24 08:54:41.586232] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:09:03.500 [2024-07-24 08:54:41.586231] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... [2024-07-24 08:54:41.586232] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... [2024-07-24 08:54:41.586323] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:03.500 [2024-07-24 08:54:41.586323] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:03.500 [2024-07-24 08:54:41.586323] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:03.500 [2024-07-24 08:54:41.586342] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
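To recap what was just traced: the target is provisioned over RPC, then four bdevperf initiators start in parallel, one per workload, each pinned to its own core with its own shm id and DPDK file prefix so their hugepage state cannot collide (their startup banners and EAL parameter lines therefore land in the log concurrently). A condensed sketch, written with scripts/rpc.py for readability; the test issues the same RPCs through rpc_cmd and builds each --json config with gen_nvmf_target_json, evidently via process substitution (the /dev/fd/63 seen above):
  rpc.py bdev_set_options -p 5 -c 1                    # tiny bdev_io pool, the point of the io_wait test
  rpc.py framework_start_init
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM disk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 &
  bdevperf -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256 &
  bdevperf -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 &
  bdevperf -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 &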
00:09:03.500 [2024-07-24 08:54:41.586411] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:03.758 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.758 [2024-07-24 08:54:41.737306] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:03.758 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.758 [2024-07-24 08:54:41.765250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.758 [2024-07-24 08:54:41.842418] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:03.758 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.758 [2024-07-24 08:54:41.845173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:03.758 [2024-07-24 08:54:41.872287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.017 [2024-07-24 08:54:41.943854] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.017 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.017 [2024-07-24 08:54:41.951321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:04.017 [2024-07-24 08:54:41.973669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.017 [2024-07-24 08:54:42.020033] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:04.017 [2024-07-24 08:54:42.049725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.017 [2024-07-24 08:54:42.054144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:04.017 [2024-07-24 08:54:42.116791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:04.275 Running I/O for 1 seconds... 00:09:04.275 Running I/O for 1 seconds... 00:09:04.275 Running I/O for 1 seconds... 00:09:04.275 Running I/O for 1 seconds... 
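A note for reading the four per-job tables below: latencies are in microseconds, and throughput follows directly from the fixed 4 KiB I/O size, MiB/s = IOPS x 4096 / 2^20. Two quick checks against the printed numbers:
  write: 11583.74 IOPS x 4096 B / 1048576 = 45.25 MiB/s, matching the MiB/s column
  flush: 197249.67 IOPS x 646.32e-6 s = ~127 I/Os in flight, i.e. the queue depth of 128 (Little's law)
The flush job's outsized IOPS is expected: flushes against a RAM-backed malloc bdev complete almost immediately.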
00:09:05.212 00:09:05.212 Latency(us) 00:09:05.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.212 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:05.212 Nvme1n1 : 1.01 11583.74 45.25 0.00 0.00 11008.12 6407.96 23204.60 00:09:05.212 =================================================================================================================== 00:09:05.212 Total : 11583.74 45.25 0.00 0.00 11008.12 6407.96 23204.60 00:09:05.212 00:09:05.212 Latency(us) 00:09:05.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.212 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:05.212 Nvme1n1 : 1.02 6098.18 23.82 0.00 0.00 20831.49 11602.30 26602.76 00:09:05.212 =================================================================================================================== 00:09:05.213 Total : 6098.18 23.82 0.00 0.00 20831.49 11602.30 26602.76 00:09:05.213 00:09:05.213 Latency(us) 00:09:05.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.213 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:05.213 Nvme1n1 : 1.00 197249.67 770.51 0.00 0.00 646.32 267.00 892.02 00:09:05.213 =================================================================================================================== 00:09:05.213 Total : 197249.67 770.51 0.00 0.00 646.32 267.00 892.02 00:09:05.472 00:09:05.472 Latency(us) 00:09:05.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.472 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:05.472 Nvme1n1 : 1.01 5022.28 19.62 0.00 0.00 25334.27 6310.87 51652.08 00:09:05.472 =================================================================================================================== 00:09:05.472 Total : 5022.28 19.62 0.00 0.00 25334.27 6310.87 51652.08 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3677309 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3677312 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3677316 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 
00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.731 rmmod nvme_tcp 00:09:05.731 rmmod nvme_fabrics 00:09:05.731 rmmod nvme_keyring 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3677172 ']' 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3677172 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3677172 ']' 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3677172 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3677172 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3677172' 00:09:05.731 killing process with pid 3677172 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3677172 00:09:05.731 08:54:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3677172 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:05.990 08:54:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:08.522 00:09:08.522 real 0m7.160s 00:09:08.522 user 0m15.628s 00:09:08.522 sys 0m3.513s 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 END TEST 
nvmf_bdev_io_wait 00:09:08.522 ************************************ 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.522 ************************************ 00:09:08.522 START TEST nvmf_queue_depth 00:09:08.522 ************************************ 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:08.522 * Looking for test storage... 00:09:08.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.522 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.523 08:54:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:10.426 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:10.426 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:10.426 Found net devices under 0000:09:00.0: cvl_0_0 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:10.426 Found net devices under 0000:09:00.1: cvl_0_1 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:10.426 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:10.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:10.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:09:10.427 00:09:10.427 --- 10.0.0.2 ping statistics --- 00:09:10.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.427 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:10.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:10.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:09:10.427 00:09:10.427 --- 10.0.0.1 ping statistics --- 00:09:10.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:10.427 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3679427 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3679427 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3679427 ']' 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:10.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
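The block above is nvmftestinit emulating a two-node NVMe/TCP fabric on a single host: the first E810 port (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and addressed as the target at 10.0.0.2/24, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24; an iptables rule opens TCP port 4420 (the IANA-assigned NVMe/TCP port) and the two pings confirm reachability in both directions before the target starts. Condensed to its essentials, the setup is:

    # move the target port into its own namespace; the initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP traffic arriving on the initiator-side port
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check

Everything the target does from here on runs under 'ip netns exec cvl_0_0_ns_spdk', which is why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD in the lines that follow.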
00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.427 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.427 [2024-07-24 08:54:48.310078] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:09:10.427 [2024-07-24 08:54:48.310187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:10.427 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.427 [2024-07-24 08:54:48.346665] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:10.427 [2024-07-24 08:54:48.379222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.427 [2024-07-24 08:54:48.470767] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:10.427 [2024-07-24 08:54:48.470830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:10.427 [2024-07-24 08:54:48.470847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:10.427 [2024-07-24 08:54:48.470861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:10.427 [2024-07-24 08:54:48.470873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:10.427 [2024-07-24 08:54:48.470901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.688 [2024-07-24 08:54:48.621473] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.688 Malloc0 00:09:10.688 08:54:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.688 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 [2024-07-24 08:54:48.681707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3679562 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3679562 /var/tmp/bdevperf.sock 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3679562 ']' 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:10.689 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:10.689 [2024-07-24 08:54:48.727541] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
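The bdevperf invocation captured above is what gives this test its name: it keeps up to 1024 I/Os in flight against the remote namespace. Flag meanings (per bdevperf's usage text):

    #   -z                         start suspended; wait for an RPC to begin the run
    #   -r /var/tmp/bdevperf.sock  own RPC socket (the target already owns /var/tmp/spdk.sock)
    #   -q 1024                    queue depth: outstanding I/Os kept in flight
    #   -o 4096                    I/O size in bytes (4 KiB)
    #   -w verify                  write/read/compare workload
    #   -t 10                      run time in seconds
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10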
00:09:10.689 [2024-07-24 08:54:48.727614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3679562 ] 00:09:10.689 EAL: No free 2048 kB hugepages reported on node 1 00:09:10.689 [2024-07-24 08:54:48.759149] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:10.689 [2024-07-24 08:54:48.789194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.949 [2024-07-24 08:54:48.877207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.949 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:10.949 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:10.949 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:10.949 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.949 08:54:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:11.207 NVMe0n1 00:09:11.207 08:54:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.207 08:54:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:11.207 Running I/O for 10 seconds... 
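Stripped of the xtrace noise, the whole target/initiator conversation is a short JSON-RPC sequence (rpc_cmd in the log is the harness's wrapper around scripts/rpc.py; the arguments below are exactly as issued in this run):

    # target side -- rpc.py talks to /var/tmp/spdk.sock by default
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB RAM-backed bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side -- attach the remote namespace as bdev NVMe0n1, then kick off the run
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests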
00:09:23.404
00:09:23.404 Latency(us)
00:09:23.404 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:23.404 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:09:23.404 Verification LBA range: start 0x0 length 0x4000
00:09:23.404 NVMe0n1 : 10.11 8090.69 31.60 0.00 0.00 126017.82 24369.68 76895.57
00:09:23.404 ===================================================================================================================
00:09:23.404 Total : 8090.69 31.60 0.00 0.00 126017.82 24369.68 76895.57
00:09:23.404 0
00:09:23.404 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3679562
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3679562 ']'
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3679562
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3679562
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3679562'
00:09:23.405 killing process with pid 3679562
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3679562
00:09:23.405 Received shutdown signal, test time was about 10.000000 seconds
00:09:23.405
00:09:23.405 Latency(us)
00:09:23.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:23.405 ===================================================================================================================
00:09:23.405 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3679562
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:23.405 rmmod nvme_tcp
00:09:23.405 rmmod nvme_fabrics
00:09:23.405 rmmod nvme_keyring
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
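The headline numbers in the table above are internally consistent with the 1024-deep queue. By Little's law, average latency ≈ outstanding I/Os / IOPS = 1024 / 8090.69 ≈ 0.1266 s, which matches the reported 126017.82 us average to within half a percent; throughput follows directly from the I/O size, 8090.69 IOPS x 4096 B ≈ 31.60 MiB/s; and the 10.11 s measured runtime is the requested 10 s plus startup and ramp-down overhead. The second, all-zero table is just bdevperf's shutdown handler printing an empty summary after the timed run has already completed.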
08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3679427 ']' 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3679427 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3679427 ']' 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3679427 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3679427 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3679427' 00:09:23.405 killing process with pid 3679427 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3679427 00:09:23.405 08:54:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3679427 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.405 08:55:00 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.341 00:09:24.341 real 0m16.009s 00:09:24.341 user 0m21.813s 00:09:24.341 sys 0m3.405s 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.341 ************************************ 00:09:24.341 END TEST nvmf_queue_depth 00:09:24.341 ************************************ 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 
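run_test, visible in the START/END banners here, is the harness entry point in common/autotest_common.sh: it prints the banner markers, times the script (the real/user/sys lines are bash's time output), and forwards arguments such as --transport=tcp. A stripped-down sketch of its shape, for orientation only (the real helper also manages xtrace state and result bookkeeping):

    # illustrative approximation -- the actual run_test lives in common/autotest_common.sh
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # e.g. .../multipath.sh --transport=tcp
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }

The multipath test it launches next will end almost immediately: multipath.sh wants a second NIC pair, this rig only has the two cvl_0_* ports, so it prints 'only one NIC for nvmf test' and exits 0 after tearing the namespace back down.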
00:09:24.341 08:55:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:24.341 ************************************ 00:09:24.341 START TEST nvmf_target_multipath 00:09:24.341 ************************************ 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:24.342 * Looking for test storage... 00:09:24.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.342 08:55:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
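The wall of text above is mostly PATH churn: /etc/opt/spdk-pkgdep/paths/export.sh prepends the golangci, protoc, and go bin directories each time it is sourced, and since every test script re-sources common.sh (and with it export.sh), the same three prefixes pile up once per nesting level. This is harmless, because PATH lookup stops at the first match, but if the duplication ever needed trimming, a dedup along these lines would do it (illustrative only, not part of the harness):

    # collapse duplicate PATH entries while preserving first-seen order
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # awk's ORS leaves a trailing ':'; strip it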
00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.342 08:55:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.245 08:55:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:26.245 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
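This is the same NIC discovery pass seen in the queue-depth test: nvmf/common.sh keeps per-family PCI ID lists, so 0x8086:0x1592/0x159b land in e810[], 0x8086:0x37d2 in x722[], and the 0x15b3 entries in mlx[]; the two 0x159b functions found here are E810 ports bound to the ice driver, and the script then resolves each PCI function to its kernel netdev through sysfs (the "Found net devices under ..." lines that follow). The resolution step amounts to:

    # map a PCI function to its kernel net interface, as nvmf/common.sh does
    pci=0000:09:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # globs to .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"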
00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:26.245 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.245 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:26.246 Found net devices under 0000:09:00.0: cvl_0_0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:26.246 Found net devices under 0000:09:00.1: cvl_0_1 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:09:26.246 00:09:26.246 --- 10.0.0.2 ping statistics --- 00:09:26.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.246 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:26.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:09:26.246 00:09:26.246 --- 10.0.0.1 ping statistics --- 00:09:26.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.246 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:26.246 only one NIC for nvmf test 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.246 rmmod nvme_tcp 00:09:26.246 rmmod nvme_fabrics 00:09:26.246 rmmod nvme_keyring 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:26.246 
08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:26.246 08:55:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:28.781
00:09:28.781 real 0m4.142s
00:09:28.781 user 0m0.744s
00:09:28.781 sys 0m1.380s
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:28.781 ************************************
00:09:28.781 END TEST nvmf_target_multipath
************************************
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:28.781 08:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:28.782 ************************************
00:09:28.782 START TEST nvmf_zcopy
************************************
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:28.782 * Looking for test storage...
00:09:28.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- #
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:28.782 08:55:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # 
local -ga x722 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:30.685 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:30.685 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.685 08:55:08 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:30.685 Found net devices under 0000:09:00.0: cvl_0_0 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:30.685 Found net devices under 0000:09:00.1: cvl_0_1 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:30.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:30.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms
00:09:30.685
00:09:30.685 --- 10.0.0.2 ping statistics ---
00:09:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:30.685 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:30.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:30.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms
00:09:30.685
00:09:30.685 --- 10.0.0.1 ping statistics ---
00:09:30.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:30.685 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:30.685 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3684634
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3684634
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3684634 ']'
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:30.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable
00:09:30.686 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.686 [2024-07-24 08:55:08.682793] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
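This is the per-test network bring-up that nvmf_tcp_init performs: one port of the NIC (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and each side pings the other before the target is launched inside the namespace. Condensed from the commands traced above; the interface names and addresses are specific to this run, so treat this as a sketch rather than a general-purpose script:

    # Condensed from the nvmf_tcp_init trace above (names/IPs are run-specific).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port -> namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator
    # nvmf_tgt itself is then started inside the namespace, as traced above:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2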
00:09:30.686 [2024-07-24 08:55:08.682880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:30.686 EAL: No free 2048 kB hugepages reported on node 1
00:09:30.686 [2024-07-24 08:55:08.722057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:30.686 [2024-07-24 08:55:08.748922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:30.945 [2024-07-24 08:55:08.835073] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:30.945 [2024-07-24 08:55:08.835146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:30.945 [2024-07-24 08:55:08.835161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:30.945 [2024-07-24 08:55:08.835173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:30.945 [2024-07-24 08:55:08.835197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:09:30.945 [2024-07-24 08:55:08.835224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 [2024-07-24 08:55:08.977854] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 [2024-07-24 08:55:08.994090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 malloc0
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:30.945 {
00:09:30.945 "params": {
00:09:30.945 "name": "Nvme$subsystem",
00:09:30.945 "trtype": "$TEST_TRANSPORT",
00:09:30.945 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:30.945 "adrfam": "ipv4",
00:09:30.945 "trsvcid": "$NVMF_PORT",
00:09:30.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:30.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:30.945 "hdgst": ${hdgst:-false},
00:09:30.945 "ddgst": ${ddgst:-false}
00:09:30.945 },
00:09:30.945 "method": "bdev_nvme_attach_controller"
00:09:30.945 }
00:09:30.945 EOF
00:09:30.945 )")
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
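The rpc_cmd calls traced above provision the zero-copy target end to end. rpc_cmd is the test harness's wrapper that ultimately drives scripts/rpc.py, so the same setup can be sketched as direct rpc.py invocations (values are the ones from this run; treat this as an illustration, not the harness's literal code):

    # Sketch of the provisioning sequence above, as plain rpc.py calls.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0               # 32 MB RAM-backed bdev, 4 KiB blocks
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The heredoc that follows (config+=("$(cat <<-EOF ...) is gen_nvmf_target_json expanding those same values into the initiator-side bdevperf configuration.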
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:09:30.945 08:55:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:09:30.945 "params": {
00:09:30.945 "name": "Nvme1",
00:09:30.945 "trtype": "tcp",
00:09:30.945 "traddr": "10.0.0.2",
00:09:30.945 "adrfam": "ipv4",
00:09:30.945 "trsvcid": "4420",
00:09:30.945 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:30.945 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:30.945 "hdgst": false,
00:09:30.945 "ddgst": false
00:09:30.945 },
00:09:30.945 "method": "bdev_nvme_attach_controller"
00:09:30.945 }'
00:09:31.204 [2024-07-24 08:55:09.088942] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:09:31.204 [2024-07-24 08:55:09.089034] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3684770 ]
00:09:31.204 EAL: No free 2048 kB hugepages reported on node 1
00:09:31.204 [2024-07-24 08:55:09.128825] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:31.204 [2024-07-24 08:55:09.160496] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:31.204 [2024-07-24 08:55:09.251956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:09:31.783 Running I/O for 10 seconds...
00:09:41.798
00:09:41.798 Latency(us)
00:09:41.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:41.798 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:41.798 Verification LBA range: start 0x0 length 0x1000
00:09:41.798 Nvme1n1 : 10.01 5667.97 44.28 0.00 0.00 22495.94 1711.22 33593.27
00:09:41.798 ===================================================================================================================
00:09:41.798 Total : 5667.97 44.28 0.00 0.00 22495.94 1711.22 33593.27
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3685972
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:09:41.798 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:09:41.799 {
00:09:41.799 "params": {
00:09:41.799 "name": "Nvme$subsystem",
00:09:41.799 "trtype": "$TEST_TRANSPORT",
00:09:41.799 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:41.799 "adrfam": "ipv4",
00:09:41.799 "trsvcid": "$NVMF_PORT",
00:09:41.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:41.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:41.799 "hdgst":
${hdgst:-false}, 00:09:41.799 "ddgst": ${ddgst:-false} 00:09:41.799 }, 00:09:41.799 "method": "bdev_nvme_attach_controller" 00:09:41.799 } 00:09:41.799 EOF 00:09:41.799 )") 00:09:41.799 [2024-07-24 08:55:19.878320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.799 [2024-07-24 08:55:19.878361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.799 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:41.799 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:09:41.799 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:41.799 08:55:19 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:41.799 "params": { 00:09:41.799 "name": "Nvme1", 00:09:41.799 "trtype": "tcp", 00:09:41.799 "traddr": "10.0.0.2", 00:09:41.799 "adrfam": "ipv4", 00:09:41.799 "trsvcid": "4420", 00:09:41.799 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:41.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:41.799 "hdgst": false, 00:09:41.799 "ddgst": false 00:09:41.799 }, 00:09:41.799 "method": "bdev_nvme_attach_controller" 00:09:41.799 }' 00:09:41.799 [2024-07-24 08:55:19.886295] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.799 [2024-07-24 08:55:19.886319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.799 [2024-07-24 08:55:19.894312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.799 [2024-07-24 08:55:19.894334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.799 [2024-07-24 08:55:19.902333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.799 [2024-07-24 08:55:19.902355] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.799 [2024-07-24 08:55:19.910353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.799 [2024-07-24 08:55:19.910373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.918401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.918429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.920484] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
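bdevperf never reads a config file from disk in these runs: --json /dev/fd/62 (first run) and /dev/fd/63 (second run) are bash process substitutions carrying the JSON that gen_nvmf_target_json printf'd above. A minimal sketch of that plumbing, under the assumption that the harness passes the generated config straight through (the fragment shown in the trace is embedded in a fuller config document by the real helper):

    # Sketch of the --json /dev/fd/NN plumbing traced above.
    BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
    "$BDEVPERF" --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192
    # -t 5            run for 5 seconds (the first run used -t 10 -w verify)
    # -q 128          queue depth
    # -w randrw -M 50 50/50 random read/write mix
    # -o 8192         8 KiB I/O size

The interleaved "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begin here are not the benchmark failing: they appear to come from the test deliberately re-issuing nvmf_subsystem_add_ns for the already-attached namespace while I/O is in flight, and the 5-second randrw run that follows still completes.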
00:09:42.057 [2024-07-24 08:55:19.920554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3685972 ] 00:09:42.057 [2024-07-24 08:55:19.926417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.926443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.934447] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.934472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.942480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.942505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.057 [2024-07-24 08:55:19.950492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.950516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.953051] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:42.057 [2024-07-24 08:55:19.958513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.958537] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.966534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.966558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.974554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.974578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.982579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.982603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.984402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.057 [2024-07-24 08:55:19.990624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.990657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:19.998644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:19.998680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.006663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.006696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.014670] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.014697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:09:42.057 [2024-07-24 08:55:20.022691] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.022716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.030713] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.030738] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.038759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.038799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.046761] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.046789] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.054771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.054793] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.062808] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.062838] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.070822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.070848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.076575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.057 [2024-07-24 08:55:20.078843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.078868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.086864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.086889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.094921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.094956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.102938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.102974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.110956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.110994] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.118980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.119032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.127004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.127042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 
08:55:20.135025] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.135063] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.143036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.143062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.151061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.151124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.159116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.159167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.057 [2024-07-24 08:55:20.167127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.057 [2024-07-24 08:55:20.167176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.175160] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.175187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.183145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.183185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.191191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.191218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.199198] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.199223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.207209] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.207233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.215238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.215261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.223246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.223267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.231266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.231287] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.239287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.315 [2024-07-24 08:55:20.239308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.315 [2024-07-24 08:55:20.247310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.247330] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.255356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.255393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.263364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.263403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.271428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.271457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.279429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.279454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.288635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.288665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.295495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.295522] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 Running I/O for 5 seconds... 00:09:42.316 [2024-07-24 08:55:20.303528] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.303554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.318332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.318361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.329879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.329908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.342934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.342962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.353534] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.353561] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.364592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.364620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.377246] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.377273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.387550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.387577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.316 [2024-07-24 08:55:20.398421] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.316 [2024-07-24 08:55:20.398448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-message error pair repeats for every subsequent add-namespace attempt, timestamps [2024-07-24 08:55:20.409440] through [2024-07-24 08:55:23.736069] ...]
00:09:45.688 [2024-07-24 08:55:23.746139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.746166]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.688 [2024-07-24 08:55:23.756918] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.756946] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.688 [2024-07-24 08:55:23.767769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.767795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.688 [2024-07-24 08:55:23.778172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.778199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.688 [2024-07-24 08:55:23.788686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.788713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.688 [2024-07-24 08:55:23.799222] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.688 [2024-07-24 08:55:23.799249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.812060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.812089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.821396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.821423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.833016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.833044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.843419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.843447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.854286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.854313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.864551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.864578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.875115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.875142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.885747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.885790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.898207] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.898235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.910174] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.910201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.919088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.919124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.930632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.930659] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.943655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.943683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.953977] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.954004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.965066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.965094] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.977856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.977883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.987611] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.987638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:23.998215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:23.998242] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:24.010438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:24.010466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:24.020273] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:24.020302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:24.031233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:24.031261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:24.043466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:24.043493] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:45.946 [2024-07-24 08:55:24.052911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:45.946 [2024-07-24 08:55:24.052954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.064706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.064734] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.077715] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.077744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.088085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.088120] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.098871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.098898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.109221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.109249] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.119908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.119935] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.130297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.130324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.141233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.141261] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.154177] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.154204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.204 [2024-07-24 08:55:24.164520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.204 [2024-07-24 08:55:24.164547] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.175560] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.175587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.188313] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.188340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.198531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.198558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.209195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.209222] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.220602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.220629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.233458] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.233486] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.243359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.243387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.254469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.254504] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.267435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.267462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.277797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.277824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.288443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.288472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.299637] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.299663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.205 [2024-07-24 08:55:24.310549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.205 [2024-07-24 08:55:24.310577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.323687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.323718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.333658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.333687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.344641] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.344669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.357487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.357514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.367612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.367639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.378667] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.378694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.392017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.392045] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.402542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.402568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.413089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.413125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.423849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.423876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.434578] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.434605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.445580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.445607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.456449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.456476] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.469250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.469286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.479078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.479114] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.490428] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.490456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.503491] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.503517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.514239] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.514266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.525089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.525125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.536221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.536248] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.547224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.547251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.463 [2024-07-24 08:55:24.560601] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.463 [2024-07-24 08:55:24.560628] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.464 [2024-07-24 08:55:24.571042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.464 [2024-07-24 08:55:24.571070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.582008] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.582037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.592832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.592861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.603689] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.603718] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.616540] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.616569] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.626624] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.626651] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.637759] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.637787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.650419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.650456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.661049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.661077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.671884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.671911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.684508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.684543] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.694954] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.694981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.709566] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.709596] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.719355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.719383] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.730511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.730539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.741216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.741243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.752230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.752258] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.764823] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.764850] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.774633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.774661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.786121] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.786149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.798401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.798429] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.808579] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.808607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.819037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.819065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.722 [2024-07-24 08:55:24.829940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.722 [2024-07-24 08:55:24.829967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.841504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.841540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.852426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.852455] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.863684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.863712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.874364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.874392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.885525] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.885553] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.896372] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.896407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.909092] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.909128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.919271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.919298] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.930035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.930062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.943050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.943076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.952884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.952911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.963440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.963467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.974172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.974199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.984468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.984495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:24.995018] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:24.995045] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.005744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.005771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.018512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.018540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.028753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.028781] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.039498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.039525] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.052278] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.052305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.062257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.062284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.073048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.073075] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.085326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.085353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.980 [2024-07-24 08:55:25.095158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.980 [2024-07-24 08:55:25.095186] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.106777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.106812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.119435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.119463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.129596] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.129623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.140349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.140377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.153849] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.153877] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.164615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.164642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.175157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.175184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.186186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.186214] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.197430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.197457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.208089] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.208123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.218349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.218377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.228616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.228644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.238988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.239016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.249497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.249524] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.260235] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.260263] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.270772] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.270799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.280851] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.280878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.291217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.291244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.301749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.301777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.312300] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.312328] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.321380] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.321407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 00:09:47.239 Latency(us) 00:09:47.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.239 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:47.239 Nvme1n1 : 5.01 11823.98 92.37 0.00 0.00 10811.87 4369.07 23690.05 00:09:47.239 =================================================================================================================== 00:09:47.239 Total : 11823.98 92.37 0.00 0.00 10811.87 4369.07 23690.05 00:09:47.239 [2024-07-24 08:55:25.326718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.326744] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.334718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.334743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.342738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.342773] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.239 [2024-07-24 08:55:25.350803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.239 [2024-07-24 08:55:25.350849] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.358804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.358848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.366815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.366857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.374830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.374873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.382854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.382894] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.390876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.390915] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.398902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.398942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.406917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.406956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.414948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.414989] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.422972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.423015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.430990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.431030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.439007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.439047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.447025] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.447065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.455047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.455087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.463069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.463117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.471068] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.471093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.479095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.479131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.487145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.487188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.495175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.495218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.503190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.503221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.511193] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.511216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.519225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.519266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.527250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.527292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.535257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.535286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.543259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.543280] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.551281] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.551302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.498 [2024-07-24 08:55:25.559302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.498 [2024-07-24 08:55:25.559323] 
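As a quick sanity check on the summary table above (assuming the MiB/s column is derived as IOPS x IO size / 2^20, with the 8192-byte IO size shown in the job line):

  awk 'BEGIN { printf "%.2f MiB/s\n", 11823.98 * 8192 / 1048576 }'   # prints 92.37 MiB/s, matching the table

The Fail/s and TO/s columns are zero, so no I/O errors or timeouts were recorded for this job.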
00:09:47.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3685972) - No such process
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3685972
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.499 delay0
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:47.499 08:55:25 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:09:47.757 EAL: No free 2048 kB hugepages reported on node 1
00:09:47.757 [2024-07-24 08:55:25.723262] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:09:54.315 Initializing NVMe Controllers
00:09:54.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:54.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:09:54.315 Initialization complete. Launching workers.
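For anyone replaying this step by hand, the rpc_cmd wrapper calls traced above correspond to plain rpc.py invocations. A minimal sketch, run from the SPDK repository root and assuming the default RPC socket (the four bdev_delay_create values are latencies in microseconds: average read, p99 read, average write, p99 write):

  # Swap the real namespace for an artificially slow delay bdev at NSID 1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # With ~1 s of injected latency per I/O, queued commands live long enough to be aborted
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The counters that follow tally aborts that completed successfully ("success"), aborts that returned a failure status, likely because the target I/O finished first ("unsuccess"), and aborts that errored outright ("failed").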
00:09:54.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 765
00:09:54.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1052, failed to submit 33
00:09:54.315 success 873, unsuccess 179, failed 0
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:54.315 08:55:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:54.315 rmmod nvme_tcp
00:09:54.315 rmmod nvme_fabrics
00:09:54.315 rmmod nvme_keyring
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3684634 ']'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3684634 ']'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3684634'
00:09:54.315 killing process with pid 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3684634
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:54.315 08:55:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:09:56.848
00:09:56.848 real    0m28.045s
00:09:56.848 user    0m36.049s
00:09:56.848 sys     0m10.191s
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:56.848 ************************************
00:09:56.848 END TEST nvmf_zcopy
00:09:56.848 ************************************
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:56.848 ************************************
00:09:56.848 START TEST nvmf_nmic
00:09:56.848 ************************************
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:56.848 * Looking for test storage...
00:09:56.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
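The host identity exported just above comes from nvme-cli; a sketch of how these variables can be derived (the parameter expansion is an assumed equivalent, not a quote of common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep everything after the last ':', i.e. the bare UUID
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")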
"--hostid=$NVME_HOSTID") 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.848 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:56.849 08:55:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:58.750 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:58.751 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:58.751 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:58.751 Found net devices under 0000:09:00.0: cvl_0_0 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:58.751 Found net devices under 0000:09:00.1: cvl_0_1 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:58.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
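# nvmf_tcp_init above builds a point-to-point TCP topology out of the two
# e810 ports found during the PCI scan: cvl_0_0 moves into a private network
# namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in
# the root namespace as the initiator side (10.0.0.1). A condensed sketch of
# the same bring-up, assuming the cvl_0_0/cvl_0_1 names from this run:
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# the pings that follow confirm reachability in both directions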
00:09:58.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:09:58.751 00:09:58.751 --- 10.0.0.2 ping statistics --- 00:09:58.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.751 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:58.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:58.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:09:58.751 00:09:58.751 --- 10.0.0.1 ping statistics --- 00:09:58.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:58.751 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3689363 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3689363 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3689363 ']' 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:58.751 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:58.751 [2024-07-24 08:55:36.645006] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
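# nvmfappstart has just launched nvmf_tgt inside the namespace (pid 3689363)
# and waitforlisten blocks until the app answers on its RPC socket before
# any rpc_cmd is issued. A rough sketch of an equivalent wait loop, assuming
# the default /var/tmp/spdk.sock socket path used by scripts/rpc.py:
while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 3689363 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done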
00:09:58.751 [2024-07-24 08:55:36.645096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.751 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.752 [2024-07-24 08:55:36.683009] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:58.752 [2024-07-24 08:55:36.714919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:58.752 [2024-07-24 08:55:36.809921] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:58.752 [2024-07-24 08:55:36.809983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:58.752 [2024-07-24 08:55:36.810000] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:58.752 [2024-07-24 08:55:36.810014] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:58.752 [2024-07-24 08:55:36.810025] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:58.752 [2024-07-24 08:55:36.810115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:58.752 [2024-07-24 08:55:36.810156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:58.752 [2024-07-24 08:55:36.810206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:58.752 [2024-07-24 08:55:36.810209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 [2024-07-24 08:55:36.963846] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 Malloc0 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 [2024-07-24 08:55:37.015077] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:59.011 test case1: single bdev can't be used in multiple subsystems 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.011 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.011 [2024-07-24 08:55:37.038940] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:59.011 [2024-07-24 08:55:37.038967] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:59.011 [2024-07-24 08:55:37.038997] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.011 request: 00:09:59.011 { 00:09:59.011 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:59.011 "namespace": { 00:09:59.011 "bdev_name": "Malloc0", 00:09:59.011 "no_auto_visible": false 00:09:59.011 }, 00:09:59.011 "method": "nvmf_subsystem_add_ns", 00:09:59.011 "req_id": 1 00:09:59.012 } 00:09:59.012 Got JSON-RPC error response 00:09:59.012 response: 00:09:59.012 { 00:09:59.012 "code": -32602, 00:09:59.012 "message": "Invalid parameters" 00:09:59.012 } 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:59.012 Adding namespace failed - expected result. 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:59.012 test case2: host connect to nvmf target in multiple paths 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:59.012 [2024-07-24 08:55:37.047052] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:59.012 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:59.945 08:55:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:00.521 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.521 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:10:00.521 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.521 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:00.521 08:55:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 00:10:02.418 08:55:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:02.418 [global] 00:10:02.418 thread=1 00:10:02.418 invalidate=1 00:10:02.418 rw=write 00:10:02.418 time_based=1 00:10:02.418 runtime=1 00:10:02.418 ioengine=libaio 00:10:02.418 direct=1 00:10:02.418 bs=4096 00:10:02.418 iodepth=1 00:10:02.418 norandommap=0 00:10:02.418 numjobs=1 00:10:02.418 00:10:02.418 verify_dump=1 00:10:02.418 verify_backlog=512 00:10:02.418 verify_state_save=0 00:10:02.418 do_verify=1 00:10:02.418 verify=crc32c-intel 00:10:02.418 [job0] 00:10:02.418 filename=/dev/nvme0n1 00:10:02.418 Could not set queue depth (nvme0n1) 00:10:02.676 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:02.676 fio-3.35 00:10:02.676 Starting 1 thread 00:10:04.092 00:10:04.092 job0: (groupid=0, jobs=1): err= 0: pid=3689997: Wed Jul 24 08:55:41 2024 00:10:04.092 read: IOPS=2008, BW=8032KiB/s (8225kB/s)(8032KiB/1000msec) 00:10:04.092 slat (nsec): min=4342, max=54694, avg=11940.41, stdev=7172.50 00:10:04.092 clat (usec): min=235, max=552, avg=286.27, stdev=39.44 00:10:04.092 lat (usec): min=240, max=585, avg=298.21, stdev=41.20 00:10:04.092 clat percentiles (usec): 00:10:04.092 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 253], 20.00th=[ 260], 00:10:04.092 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:04.092 | 70.00th=[ 289], 80.00th=[ 306], 90.00th=[ 343], 95.00th=[ 379], 00:10:04.092 | 99.00th=[ 441], 99.50th=[ 445], 99.90th=[ 519], 99.95th=[ 553], 00:10:04.092 | 99.99th=[ 553] 00:10:04.092 write: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec); 0 zone resets 00:10:04.092 slat (nsec): min=5468, max=36766, avg=11393.22, stdev=5543.11 00:10:04.092 clat (usec): min=153, max=383, avg=177.48, stdev=12.52 00:10:04.092 lat (usec): min=160, max=390, avg=188.87, stdev=14.74 00:10:04.092 clat percentiles (usec): 00:10:04.092 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 163], 20.00th=[ 167], 00:10:04.092 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:10:04.092 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 192], 95.00th=[ 196], 00:10:04.092 | 99.00th=[ 208], 99.50th=[ 215], 99.90th=[ 221], 99.95th=[ 347], 00:10:04.092 | 99.99th=[ 383] 00:10:04.092 bw ( KiB/s): min= 8272, max= 8272, per=100.00%, avg=8272.00, stdev= 0.00, samples=1 00:10:04.092 iops : min= 2068, max= 2068, avg=2068.00, stdev= 0.00, samples=1 00:10:04.092 lat (usec) : 250=53.97%, 500=45.93%, 750=0.10% 00:10:04.092 cpu : usr=3.20%, sys=4.40%, ctx=4056, majf=0, minf=2 00:10:04.092 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:04.092 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.093 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.093 issued rwts: total=2008,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.093 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:04.093 00:10:04.093 Run status group 0 (all jobs): 00:10:04.093 READ: bw=8032KiB/s (8225kB/s), 8032KiB/s-8032KiB/s (8225kB/s-8225kB/s), io=8032KiB (8225kB), run=1000-1000msec 00:10:04.093 WRITE: bw=8192KiB/s (8389kB/s), 8192KiB/s-8192KiB/s (8389kB/s-8389kB/s), io=8192KiB (8389kB), run=1000-1000msec 
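# The fio-wrapper call above boils down to a single-job libaio run at queue
# depth 1 with 4 KiB blocks and crc32c data verification; the READ figures
# in the summary come from fio reading completed writes back to verify
# them, not from a separate read phase. A standalone equivalent of the job
# file printed above, assuming the /dev/nvme0n1 device created by the
# connect step:
fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --bs=4096 --iodepth=1 --rw=write --time_based --runtime=1 \
    --verify=crc32c-intel --do_verify=1 --verify_backlog=512 --verify_dump=1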
00:10:04.093 00:10:04.093 Disk stats (read/write): 00:10:04.093 nvme0n1: ios=1721/2048, merge=0/0, ticks=495/326, in_queue=821, util=91.68% 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:04.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:04.093 rmmod nvme_tcp 00:10:04.093 rmmod nvme_fabrics 00:10:04.093 rmmod nvme_keyring 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3689363 ']' 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3689363 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3689363 ']' 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3689363 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:04.093 08:55:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3689363 00:10:04.093 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:04.093 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:04.093 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3689363' 00:10:04.093 
killing process with pid 3689363 00:10:04.093 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 3689363 00:10:04.093 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3689363 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:04.353 08:55:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.259 00:10:06.259 real 0m9.860s 00:10:06.259 user 0m22.454s 00:10:06.259 sys 0m2.403s 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:06.259 ************************************ 00:10:06.259 END TEST nvmf_nmic 00:10:06.259 ************************************ 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.259 ************************************ 00:10:06.259 START TEST nvmf_fio_target 00:10:06.259 ************************************ 00:10:06.259 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:06.518 * Looking for test storage... 
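# The nmic suite that just ended proved two properties: a bdev already
# claimed by one subsystem is rejected by a second (nvmf_subsystem_add_ns
# returned -32602 above, the expected result), and one host can reach the
# same subsystem through two listeners (ports 4420 and 4421). A condensed
# replay of the RPC/connect sequence, assuming the default RPC socket and
# the addresses from this run:
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
    || echo "expected failure: Malloc0 already claimed by cnode1"
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420  # first path
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421  # second path
# (the run also passes --hostnqn/--hostid on each connect; omitted here)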
00:10:06.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.518 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.519 08:55:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.519 08:55:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.519 08:55:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.417 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:08.418 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:08.418 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:08.418 
08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:08.418 Found net devices under 0000:09:00.0: cvl_0_0 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:08.418 Found net devices under 0000:09:00.1: cvl_0_1 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:08.418 08:55:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:08.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:10:08.418 00:10:08.418 --- 10.0.0.2 ping statistics --- 00:10:08.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.418 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:10:08.418 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:10:08.419 00:10:08.419 --- 10.0.0.1 ping statistics --- 00:10:08.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.419 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3692073 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3692073 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3692073 ']' 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:08.419 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.419 [2024-07-24 08:55:46.460247] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
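Condensing the nvmf_tcp_init sequence traced above: one E810 port (cvl_0_0) is moved into a fresh network namespace and gets the target address, its sibling (cvl_0_1) stays in the root namespace as the initiator, port 4420 is opened, and both directions are smoke-tested with ping before the target starts. The commands are lifted from the trace; treat this as the shape of the setup rather than the verbatim common.sh:

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
  # nvmf_tgt is then launched inside the namespace, exactly as traced:
  # ip netns exec "$NS" /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF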
00:10:08.419 [2024-07-24 08:55:46.460325] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.419 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.419 [2024-07-24 08:55:46.498223] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:08.419 [2024-07-24 08:55:46.530239] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.676 [2024-07-24 08:55:46.625535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.676 [2024-07-24 08:55:46.625597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:08.676 [2024-07-24 08:55:46.625613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.676 [2024-07-24 08:55:46.625627] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.676 [2024-07-24 08:55:46.625639] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.676 [2024-07-24 08:55:46.625733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.676 [2024-07-24 08:55:46.625785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.676 [2024-07-24 08:55:46.625849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.676 [2024-07-24 08:55:46.625851] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.676 08:55:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:08.933 [2024-07-24 08:55:47.006358] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.933 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.191 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:09.191 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.756 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:09.756 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:09.756 
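The RPC sequence that starts above and continues below builds the target configuration: a TCP transport, seven 64 MB malloc bdevs (512-byte blocks), a RAID-0 and a concat array over five of them, and a single subsystem that exposes all of it on 10.0.0.2:4420 before the initiator connects. Condensed from the trace, with the rpc.py path shortened into a variable:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  for i in 0 1 2 3 4 5 6; do "$rpc" bdev_malloc_create 64 512; done   # -> Malloc0..Malloc6
  "$rpc" bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
  "$rpc" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # plus the --hostnqn/--hostid pair traced below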
08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:09.756 08:55:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.013 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:10.013 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:10.270 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.527 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:10.527 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:10.785 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:10.785 08:55:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:11.042 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:11.042 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:11.298 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:11.555 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:11.555 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.811 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:11.811 08:55:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:12.068 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.325 [2024-07-24 08:55:50.363326] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.325 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:12.583 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:12.840 08:55:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:10:13.773 08:55:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:10:15.669 08:55:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:15.669 [global] 00:10:15.669 thread=1 00:10:15.669 invalidate=1 00:10:15.669 rw=write 00:10:15.669 time_based=1 00:10:15.669 runtime=1 00:10:15.669 ioengine=libaio 00:10:15.669 direct=1 00:10:15.669 bs=4096 00:10:15.669 iodepth=1 00:10:15.669 norandommap=0 00:10:15.669 numjobs=1 00:10:15.669 00:10:15.669 verify_dump=1 00:10:15.669 verify_backlog=512 00:10:15.669 verify_state_save=0 00:10:15.669 do_verify=1 00:10:15.669 verify=crc32c-intel 00:10:15.669 [job0] 00:10:15.669 filename=/dev/nvme0n1 00:10:15.669 [job1] 00:10:15.669 filename=/dev/nvme0n2 00:10:15.669 [job2] 00:10:15.669 filename=/dev/nvme0n3 00:10:15.669 [job3] 00:10:15.670 filename=/dev/nvme0n4 00:10:15.670 Could not set queue depth (nvme0n1) 00:10:15.670 Could not set queue depth (nvme0n2) 00:10:15.670 Could not set queue depth (nvme0n3) 00:10:15.670 Could not set queue depth (nvme0n4) 00:10:15.927 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.927 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.927 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.927 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:15.927 fio-3.35 00:10:15.927 Starting 4 threads 00:10:17.295 00:10:17.295 job0: (groupid=0, jobs=1): err= 0: pid=3693040: Wed Jul 24 08:55:55 2024 00:10:17.295 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:10:17.295 slat (nsec): min=10241, max=35317, avg=28389.64, stdev=8741.44 00:10:17.295 clat (usec): min=40903, max=41032, 
avg=40958.73, stdev=27.53 00:10:17.295 lat (usec): min=40937, max=41048, avg=40987.12, stdev=23.95 00:10:17.295 clat percentiles (usec): 00:10:17.295 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:17.295 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.295 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.295 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.295 | 99.99th=[41157] 00:10:17.295 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:10:17.295 slat (nsec): min=8692, max=73628, avg=18723.76, stdev=9201.76 00:10:17.295 clat (usec): min=172, max=434, avg=247.88, stdev=59.26 00:10:17.295 lat (usec): min=184, max=483, avg=266.60, stdev=63.29 00:10:17.295 clat percentiles (usec): 00:10:17.295 | 1.00th=[ 176], 5.00th=[ 182], 10.00th=[ 194], 20.00th=[ 204], 00:10:17.295 | 30.00th=[ 210], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 243], 00:10:17.295 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 334], 95.00th=[ 396], 00:10:17.295 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 433], 99.95th=[ 433], 00:10:17.295 | 99.99th=[ 433] 00:10:17.295 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:10:17.295 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:17.295 lat (usec) : 250=62.55%, 500=33.33% 00:10:17.295 lat (msec) : 50=4.12% 00:10:17.295 cpu : usr=0.77%, sys=1.06%, ctx=535, majf=0, minf=2 00:10:17.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.295 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.295 job1: (groupid=0, jobs=1): err= 0: pid=3693041: Wed Jul 24 08:55:55 2024 00:10:17.295 read: IOPS=33, BW=135KiB/s (138kB/s)(136KiB/1011msec) 00:10:17.295 slat (nsec): min=8158, max=34100, avg=21848.97, stdev=11131.91 00:10:17.295 clat (usec): min=294, max=41301, avg=25920.38, stdev=19602.27 00:10:17.295 lat (usec): min=309, max=41311, avg=25942.23, stdev=19611.09 00:10:17.295 clat percentiles (usec): 00:10:17.295 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 326], 20.00th=[ 330], 00:10:17.295 | 30.00th=[ 367], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:10:17.295 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.295 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.295 | 99.99th=[41157] 00:10:17.295 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:10:17.295 slat (nsec): min=8581, max=53297, avg=17904.17, stdev=7329.00 00:10:17.295 clat (usec): min=184, max=457, avg=228.85, stdev=24.50 00:10:17.295 lat (usec): min=200, max=479, avg=246.76, stdev=28.00 00:10:17.295 clat percentiles (usec): 00:10:17.295 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 208], 00:10:17.295 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 233], 00:10:17.295 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 265], 00:10:17.295 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 457], 99.95th=[ 457], 00:10:17.295 | 99.99th=[ 457] 00:10:17.295 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:10:17.295 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:17.295 lat (usec) : 
250=76.19%, 500=19.78% 00:10:17.295 lat (msec) : 20=0.18%, 50=3.85% 00:10:17.295 cpu : usr=0.89%, sys=0.89%, ctx=547, majf=0, minf=1 00:10:17.295 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.295 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.295 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.295 job2: (groupid=0, jobs=1): err= 0: pid=3693042: Wed Jul 24 08:55:55 2024 00:10:17.295 read: IOPS=20, BW=83.4KiB/s (85.4kB/s)(84.0KiB/1007msec) 00:10:17.295 slat (nsec): min=8701, max=37655, avg=26886.29, stdev=8621.22 00:10:17.295 clat (usec): min=40913, max=42027, avg=41132.20, stdev=370.37 00:10:17.295 lat (usec): min=40945, max=42045, avg=41159.09, stdev=365.07 00:10:17.295 clat percentiles (usec): 00:10:17.295 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:17.295 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.295 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:10:17.295 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:17.295 | 99.99th=[42206] 00:10:17.296 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:17.296 slat (nsec): min=7437, max=60367, avg=16017.96, stdev=9898.93 00:10:17.296 clat (usec): min=160, max=537, avg=258.10, stdev=74.67 00:10:17.296 lat (usec): min=169, max=597, avg=274.11, stdev=81.13 00:10:17.296 clat percentiles (usec): 00:10:17.296 | 1.00th=[ 169], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 206], 00:10:17.296 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 237], 60.00th=[ 251], 00:10:17.296 | 70.00th=[ 273], 80.00th=[ 297], 90.00th=[ 375], 95.00th=[ 420], 00:10:17.296 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 537], 99.95th=[ 537], 00:10:17.296 | 99.99th=[ 537] 00:10:17.296 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:10:17.296 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:17.296 lat (usec) : 250=56.66%, 500=38.65%, 750=0.75% 00:10:17.296 lat (msec) : 50=3.94% 00:10:17.296 cpu : usr=0.50%, sys=0.70%, ctx=533, majf=0, minf=1 00:10:17.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.296 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.296 job3: (groupid=0, jobs=1): err= 0: pid=3693043: Wed Jul 24 08:55:55 2024 00:10:17.296 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:10:17.296 slat (nsec): min=10926, max=37960, avg=29457.45, stdev=9443.76 00:10:17.296 clat (usec): min=40580, max=41068, avg=40943.47, stdev=88.93 00:10:17.296 lat (usec): min=40591, max=41086, avg=40972.93, stdev=90.72 00:10:17.296 clat percentiles (usec): 00:10:17.296 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:17.296 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:17.296 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:17.296 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:17.296 | 99.99th=[41157] 00:10:17.296 write: IOPS=497, BW=1988KiB/s 
(2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:17.296 slat (nsec): min=8331, max=51308, avg=17632.95, stdev=6863.28 00:10:17.296 clat (usec): min=184, max=307, avg=228.57, stdev=21.90 00:10:17.296 lat (usec): min=196, max=359, avg=246.20, stdev=25.13 00:10:17.296 clat percentiles (usec): 00:10:17.296 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:10:17.296 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 231], 00:10:17.296 | 70.00th=[ 239], 80.00th=[ 249], 90.00th=[ 260], 95.00th=[ 269], 00:10:17.296 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 310], 99.95th=[ 310], 00:10:17.296 | 99.99th=[ 310] 00:10:17.296 bw ( KiB/s): min= 4087, max= 4087, per=51.94%, avg=4087.00, stdev= 0.00, samples=1 00:10:17.296 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:17.296 lat (usec) : 250=78.09%, 500=17.79% 00:10:17.296 lat (msec) : 50=4.12% 00:10:17.296 cpu : usr=0.68%, sys=1.07%, ctx=534, majf=0, minf=1 00:10:17.296 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.296 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.296 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.296 00:10:17.296 Run status group 0 (all jobs): 00:10:17.296 READ: bw=380KiB/s (390kB/s), 83.4KiB/s-135KiB/s (85.4kB/s-138kB/s), io=396KiB (406kB), run=1007-1041msec 00:10:17.296 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2034KiB/s (2015kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1041msec 00:10:17.296 00:10:17.296 Disk stats (read/write): 00:10:17.296 nvme0n1: ios=40/512, merge=0/0, ticks=1559/118, in_queue=1677, util=85.07% 00:10:17.296 nvme0n2: ios=78/512, merge=0/0, ticks=947/113, in_queue=1060, util=89.10% 00:10:17.296 nvme0n3: ios=74/512, merge=0/0, ticks=789/130, in_queue=919, util=94.75% 00:10:17.296 nvme0n4: ios=74/512, merge=0/0, ticks=775/110, in_queue=885, util=95.87% 00:10:17.296 08:55:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:17.296 [global] 00:10:17.296 thread=1 00:10:17.296 invalidate=1 00:10:17.296 rw=randwrite 00:10:17.296 time_based=1 00:10:17.296 runtime=1 00:10:17.296 ioengine=libaio 00:10:17.296 direct=1 00:10:17.296 bs=4096 00:10:17.296 iodepth=1 00:10:17.296 norandommap=0 00:10:17.296 numjobs=1 00:10:17.296 00:10:17.296 verify_dump=1 00:10:17.296 verify_backlog=512 00:10:17.296 verify_state_save=0 00:10:17.296 do_verify=1 00:10:17.296 verify=crc32c-intel 00:10:17.296 [job0] 00:10:17.296 filename=/dev/nvme0n1 00:10:17.296 [job1] 00:10:17.296 filename=/dev/nvme0n2 00:10:17.296 [job2] 00:10:17.296 filename=/dev/nvme0n3 00:10:17.296 [job3] 00:10:17.296 filename=/dev/nvme0n4 00:10:17.296 Could not set queue depth (nvme0n1) 00:10:17.296 Could not set queue depth (nvme0n2) 00:10:17.296 Could not set queue depth (nvme0n3) 00:10:17.296 Could not set queue depth (nvme0n4) 00:10:17.296 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.296 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.296 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.296 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:17.296 fio-3.35 00:10:17.296 Starting 4 threads 00:10:18.671 00:10:18.671 job0: (groupid=0, jobs=1): err= 0: pid=3693371: Wed Jul 24 08:55:56 2024 00:10:18.671 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:18.671 slat (nsec): min=6569, max=58147, avg=16206.24, stdev=5192.75 00:10:18.671 clat (usec): min=266, max=710, avg=352.09, stdev=53.11 00:10:18.671 lat (usec): min=276, max=727, avg=368.29, stdev=54.90 00:10:18.671 clat percentiles (usec): 00:10:18.671 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 318], 00:10:18.671 | 30.00th=[ 322], 40.00th=[ 330], 50.00th=[ 338], 60.00th=[ 347], 00:10:18.672 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 433], 95.00th=[ 478], 00:10:18.672 | 99.00th=[ 506], 99.50th=[ 515], 99.90th=[ 660], 99.95th=[ 709], 00:10:18.672 | 99.99th=[ 709] 00:10:18.672 write: IOPS=1728, BW=6913KiB/s (7079kB/s)(6920KiB/1001msec); 0 zone resets 00:10:18.672 slat (nsec): min=7733, max=61792, avg=19045.05, stdev=8047.64 00:10:18.672 clat (usec): min=154, max=381, avg=222.34, stdev=32.73 00:10:18.672 lat (usec): min=163, max=392, avg=241.39, stdev=35.72 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 200], 00:10:18.672 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 221], 60.00th=[ 227], 00:10:18.672 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 255], 95.00th=[ 293], 00:10:18.672 | 99.00th=[ 334], 99.50th=[ 347], 99.90th=[ 355], 99.95th=[ 383], 00:10:18.672 | 99.99th=[ 383] 00:10:18.672 bw ( KiB/s): min= 8192, max= 8192, per=65.15%, avg=8192.00, stdev= 0.00, samples=1 00:10:18.672 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:18.672 lat (usec) : 250=46.66%, 500=52.60%, 750=0.73% 00:10:18.672 cpu : usr=3.70%, sys=8.30%, ctx=3267, majf=0, minf=1 00:10:18.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 issued rwts: total=1536,1730,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.672 job1: (groupid=0, jobs=1): err= 0: pid=3693388: Wed Jul 24 08:55:56 2024 00:10:18.672 read: IOPS=21, BW=84.8KiB/s (86.8kB/s)(88.0KiB/1038msec) 00:10:18.672 slat (nsec): min=12614, max=34451, avg=24445.95, stdev=9324.55 00:10:18.672 clat (usec): min=40400, max=41101, avg=40947.64, stdev=128.98 00:10:18.672 lat (usec): min=40420, max=41115, avg=40972.09, stdev=128.67 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:18.672 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.672 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.672 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:18.672 | 99.99th=[41157] 00:10:18.672 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:18.672 slat (nsec): min=7545, max=63939, avg=18152.57, stdev=7873.24 00:10:18.672 clat (usec): min=177, max=473, avg=241.45, stdev=38.44 00:10:18.672 lat (usec): min=192, max=514, avg=259.60, stdev=39.96 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 217], 00:10:18.672 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 241], 00:10:18.672 | 
70.00th=[ 249], 80.00th=[ 262], 90.00th=[ 277], 95.00th=[ 302], 00:10:18.672 | 99.00th=[ 404], 99.50th=[ 453], 99.90th=[ 474], 99.95th=[ 474], 00:10:18.672 | 99.99th=[ 474] 00:10:18.672 bw ( KiB/s): min= 4096, max= 4096, per=32.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.672 lat (usec) : 250=68.73%, 500=27.15% 00:10:18.672 lat (msec) : 50=4.12% 00:10:18.672 cpu : usr=1.06%, sys=0.77%, ctx=534, majf=0, minf=2 00:10:18.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.672 job2: (groupid=0, jobs=1): err= 0: pid=3693389: Wed Jul 24 08:55:56 2024 00:10:18.672 read: IOPS=26, BW=104KiB/s (107kB/s)(108KiB/1038msec) 00:10:18.672 slat (nsec): min=11518, max=35103, avg=23186.70, stdev=9117.66 00:10:18.672 clat (usec): min=399, max=41392, avg=33462.57, stdev=16043.74 00:10:18.672 lat (usec): min=418, max=41410, avg=33485.76, stdev=16045.68 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 433], 20.00th=[40633], 00:10:18.672 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.672 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.672 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:18.672 | 99.99th=[41157] 00:10:18.672 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:18.672 slat (nsec): min=7864, max=55798, avg=19153.95, stdev=7798.93 00:10:18.672 clat (usec): min=181, max=389, avg=237.05, stdev=28.78 00:10:18.672 lat (usec): min=196, max=403, avg=256.21, stdev=28.94 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 190], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 217], 00:10:18.672 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:10:18.672 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 289], 00:10:18.672 | 99.00th=[ 367], 99.50th=[ 383], 99.90th=[ 392], 99.95th=[ 392], 00:10:18.672 | 99.99th=[ 392] 00:10:18.672 bw ( KiB/s): min= 4096, max= 4096, per=32.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.672 lat (usec) : 250=73.10%, 500=22.82% 00:10:18.672 lat (msec) : 50=4.08% 00:10:18.672 cpu : usr=0.77%, sys=1.16%, ctx=542, majf=0, minf=1 00:10:18.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.672 job3: (groupid=0, jobs=1): err= 0: pid=3693390: Wed Jul 24 08:55:56 2024 00:10:18.672 read: IOPS=22, BW=88.5KiB/s (90.7kB/s)(92.0KiB/1039msec) 00:10:18.672 slat (nsec): min=7498, max=36159, avg=24490.00, stdev=10237.28 00:10:18.672 clat (usec): min=338, max=41164, avg=39198.16, stdev=8471.79 00:10:18.672 lat (usec): min=351, max=41179, avg=39222.65, stdev=8474.12 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 338], 5.00th=[40633], 
10.00th=[40633], 20.00th=[41157], 00:10:18.672 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:18.672 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:18.672 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:18.672 | 99.99th=[41157] 00:10:18.672 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:10:18.672 slat (nsec): min=7662, max=59423, avg=18119.53, stdev=7215.20 00:10:18.672 clat (usec): min=188, max=323, avg=242.54, stdev=21.94 00:10:18.672 lat (usec): min=197, max=376, avg=260.66, stdev=23.52 00:10:18.672 clat percentiles (usec): 00:10:18.672 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:10:18.672 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 245], 00:10:18.672 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:10:18.672 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 322], 99.95th=[ 322], 00:10:18.672 | 99.99th=[ 322] 00:10:18.672 bw ( KiB/s): min= 4096, max= 4096, per=32.58%, avg=4096.00, stdev= 0.00, samples=1 00:10:18.672 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:18.672 lat (usec) : 250=63.18%, 500=32.71% 00:10:18.672 lat (msec) : 50=4.11% 00:10:18.672 cpu : usr=0.39%, sys=1.45%, ctx=535, majf=0, minf=1 00:10:18.672 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:18.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.672 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.672 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:18.672 00:10:18.672 Run status group 0 (all jobs): 00:10:18.672 READ: bw=6191KiB/s (6339kB/s), 84.8KiB/s-6138KiB/s (86.8kB/s-6285kB/s), io=6432KiB (6586kB), run=1001-1039msec 00:10:18.672 WRITE: bw=12.3MiB/s (12.9MB/s), 1971KiB/s-6913KiB/s (2018kB/s-7079kB/s), io=12.8MiB (13.4MB), run=1001-1039msec 00:10:18.672 00:10:18.672 Disk stats (read/write): 00:10:18.672 nvme0n1: ios=1229/1536, merge=0/0, ticks=1284/338, in_queue=1622, util=85.37% 00:10:18.672 nvme0n2: ios=67/512, merge=0/0, ticks=757/115, in_queue=872, util=90.74% 00:10:18.672 nvme0n3: ios=44/512, merge=0/0, ticks=1603/104, in_queue=1707, util=93.42% 00:10:18.672 nvme0n4: ios=75/512, merge=0/0, ticks=776/114, in_queue=890, util=95.79% 00:10:18.672 08:55:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:18.672 [global] 00:10:18.672 thread=1 00:10:18.672 invalidate=1 00:10:18.672 rw=write 00:10:18.672 time_based=1 00:10:18.672 runtime=1 00:10:18.672 ioengine=libaio 00:10:18.672 direct=1 00:10:18.672 bs=4096 00:10:18.672 iodepth=128 00:10:18.672 norandommap=0 00:10:18.672 numjobs=1 00:10:18.672 00:10:18.672 verify_dump=1 00:10:18.672 verify_backlog=512 00:10:18.672 verify_state_save=0 00:10:18.672 do_verify=1 00:10:18.672 verify=crc32c-intel 00:10:18.672 [job0] 00:10:18.672 filename=/dev/nvme0n1 00:10:18.672 [job1] 00:10:18.672 filename=/dev/nvme0n2 00:10:18.672 [job2] 00:10:18.672 filename=/dev/nvme0n3 00:10:18.672 [job3] 00:10:18.672 filename=/dev/nvme0n4 00:10:18.672 Could not set queue depth (nvme0n1) 00:10:18.672 Could not set queue depth (nvme0n2) 00:10:18.672 Could not set queue depth (nvme0n3) 00:10:18.672 Could not set queue depth (nvme0n4) 00:10:18.930 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.930 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.930 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.930 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:18.930 fio-3.35 00:10:18.930 Starting 4 threads 00:10:20.303 00:10:20.304 job0: (groupid=0, jobs=1): err= 0: pid=3693620: Wed Jul 24 08:55:58 2024 00:10:20.304 read: IOPS=4653, BW=18.2MiB/s (19.1MB/s)(18.2MiB/1004msec) 00:10:20.304 slat (usec): min=3, max=7156, avg=93.31, stdev=553.52 00:10:20.304 clat (usec): min=3409, max=27616, avg=12035.60, stdev=3503.25 00:10:20.304 lat (usec): min=5210, max=27630, avg=12128.91, stdev=3548.16 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 7177], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10290], 00:10:20.304 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:10:20.304 | 70.00th=[11469], 80.00th=[13435], 90.00th=[19006], 95.00th=[20317], 00:10:20.304 | 99.00th=[23200], 99.50th=[25035], 99.90th=[27657], 99.95th=[27657], 00:10:20.304 | 99.99th=[27657] 00:10:20.304 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:10:20.304 slat (usec): min=4, max=18491, avg=99.20, stdev=766.53 00:10:20.304 clat (usec): min=4573, max=54708, avg=13857.03, stdev=6661.68 00:10:20.304 lat (usec): min=4588, max=54726, avg=13956.23, stdev=6732.92 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 5473], 5.00th=[ 8029], 10.00th=[ 9503], 20.00th=[10290], 00:10:20.304 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:10:20.304 | 70.00th=[13566], 80.00th=[16450], 90.00th=[22938], 95.00th=[30802], 00:10:20.304 | 99.00th=[37487], 99.50th=[37487], 99.90th=[38011], 99.95th=[41681], 00:10:20.304 | 99.99th=[54789] 00:10:20.304 bw ( KiB/s): min=19960, max=20496, per=32.94%, avg=20228.00, stdev=379.01, samples=2 00:10:20.304 iops : min= 4990, max= 5124, avg=5057.00, stdev=94.75, samples=2 00:10:20.304 lat (msec) : 4=0.01%, 10=13.44%, 20=75.30%, 50=11.23%, 100=0.02% 00:10:20.304 cpu : usr=7.38%, sys=10.57%, ctx=455, majf=0, minf=1 00:10:20.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:20.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.304 issued rwts: total=4672,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.304 job1: (groupid=0, jobs=1): err= 0: pid=3693621: Wed Jul 24 08:55:58 2024 00:10:20.304 read: IOPS=2414, BW=9657KiB/s (9889kB/s)(9.86MiB/1045msec) 00:10:20.304 slat (usec): min=2, max=25452, avg=188.07, stdev=1352.14 00:10:20.304 clat (usec): min=973, max=78472, avg=26368.51, stdev=16023.95 00:10:20.304 lat (usec): min=980, max=78478, avg=26556.58, stdev=16134.96 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 1418], 5.00th=[10421], 10.00th=[10945], 20.00th=[13829], 00:10:20.304 | 30.00th=[15533], 40.00th=[16450], 50.00th=[19792], 60.00th=[23725], 00:10:20.304 | 70.00th=[31851], 80.00th=[41157], 90.00th=[50594], 95.00th=[59507], 00:10:20.304 | 99.00th=[70779], 99.50th=[70779], 99.90th=[78119], 99.95th=[78119], 00:10:20.304 | 99.99th=[78119] 00:10:20.304 write: IOPS=2449, BW=9799KiB/s (10.0MB/s)(10.0MiB/1045msec); 0 zone resets 
00:10:20.304 slat (usec): min=4, max=25872, avg=194.05, stdev=1206.94 00:10:20.304 clat (usec): min=5896, max=93952, avg=25785.44, stdev=18402.85 00:10:20.304 lat (usec): min=5907, max=93960, avg=25979.49, stdev=18523.28 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 8848], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:10:20.304 | 30.00th=[12256], 40.00th=[16057], 50.00th=[16712], 60.00th=[22938], 00:10:20.304 | 70.00th=[34341], 80.00th=[40633], 90.00th=[54264], 95.00th=[62129], 00:10:20.304 | 99.00th=[83362], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:10:20.304 | 99.99th=[93848] 00:10:20.304 bw ( KiB/s): min= 8192, max=12288, per=16.68%, avg=10240.00, stdev=2896.31, samples=2 00:10:20.304 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:10:20.304 lat (usec) : 1000=0.08% 00:10:20.304 lat (msec) : 2=0.57%, 10=8.24%, 20=43.36%, 50=35.65%, 100=12.10% 00:10:20.304 cpu : usr=2.59%, sys=5.94%, ctx=215, majf=0, minf=1 00:10:20.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:20.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.304 issued rwts: total=2523,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.304 job2: (groupid=0, jobs=1): err= 0: pid=3693622: Wed Jul 24 08:55:58 2024 00:10:20.304 read: IOPS=3568, BW=13.9MiB/s (14.6MB/s)(14.1MiB/1008msec) 00:10:20.304 slat (usec): min=2, max=18073, avg=111.85, stdev=799.78 00:10:20.304 clat (usec): min=2442, max=25728, avg=15086.59, stdev=3657.78 00:10:20.304 lat (usec): min=2463, max=25731, avg=15198.43, stdev=3695.83 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[12911], 00:10:20.304 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14615], 60.00th=[15270], 00:10:20.304 | 70.00th=[16581], 80.00th=[17695], 90.00th=[19530], 95.00th=[21365], 00:10:20.304 | 99.00th=[25297], 99.50th=[25297], 99.90th=[25560], 99.95th=[25822], 00:10:20.304 | 99.99th=[25822] 00:10:20.304 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:10:20.304 slat (usec): min=3, max=41330, avg=99.53, stdev=910.77 00:10:20.304 clat (usec): min=424, max=79725, avg=16179.04, stdev=11000.90 00:10:20.304 lat (usec): min=437, max=79731, avg=16278.57, stdev=11083.62 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 1680], 5.00th=[ 3556], 10.00th=[ 6390], 20.00th=[ 8979], 00:10:20.304 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13173], 60.00th=[14091], 00:10:20.304 | 70.00th=[16057], 80.00th=[22676], 90.00th=[32375], 95.00th=[38011], 00:10:20.304 | 99.00th=[63701], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:10:20.304 | 99.99th=[80217] 00:10:20.304 bw ( KiB/s): min=13440, max=18408, per=25.93%, avg=15924.00, stdev=3512.91, samples=2 00:10:20.304 iops : min= 3360, max= 4602, avg=3981.00, stdev=878.23, samples=2 00:10:20.304 lat (usec) : 500=0.06%, 1000=0.03% 00:10:20.304 lat (msec) : 2=0.79%, 4=2.77%, 10=11.96%, 20=68.19%, 50=15.31% 00:10:20.304 lat (msec) : 100=0.88% 00:10:20.304 cpu : usr=3.57%, sys=7.35%, ctx=354, majf=0, minf=1 00:10:20.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:20.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.304 issued rwts: 
total=3597,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.304 job3: (groupid=0, jobs=1): err= 0: pid=3693623: Wed Jul 24 08:55:58 2024 00:10:20.304 read: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec) 00:10:20.304 slat (usec): min=3, max=39176, avg=120.46, stdev=917.42 00:10:20.304 clat (usec): min=8890, max=90217, avg=16083.40, stdev=10657.32 00:10:20.304 lat (usec): min=8899, max=90232, avg=16203.86, stdev=10734.83 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[10028], 5.00th=[11076], 10.00th=[11863], 20.00th=[12387], 00:10:20.304 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:10:20.304 | 70.00th=[14091], 80.00th=[15664], 90.00th=[18220], 95.00th=[39584], 00:10:20.304 | 99.00th=[81265], 99.50th=[81265], 99.90th=[81265], 99.95th=[81265], 00:10:20.304 | 99.99th=[90702] 00:10:20.304 write: IOPS=4232, BW=16.5MiB/s (17.3MB/s)(16.7MiB/1008msec); 0 zone resets 00:10:20.304 slat (usec): min=5, max=17953, avg=107.45, stdev=630.48 00:10:20.304 clat (usec): min=5710, max=55278, avg=14393.30, stdev=6008.37 00:10:20.304 lat (usec): min=7446, max=55331, avg=14500.76, stdev=6047.89 00:10:20.304 clat percentiles (usec): 00:10:20.304 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10814], 20.00th=[11994], 00:10:20.304 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304], 00:10:20.304 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16909], 95.00th=[25297], 00:10:20.304 | 99.00th=[49546], 99.50th=[50070], 99.90th=[50070], 99.95th=[50070], 00:10:20.304 | 99.99th=[55313] 00:10:20.304 bw ( KiB/s): min=14816, max=18288, per=26.96%, avg=16552.00, stdev=2455.07, samples=2 00:10:20.304 iops : min= 3704, max= 4572, avg=4138.00, stdev=613.77, samples=2 00:10:20.304 lat (msec) : 10=2.07%, 20=90.67%, 50=5.20%, 100=2.06% 00:10:20.304 cpu : usr=7.05%, sys=8.94%, ctx=404, majf=0, minf=1 00:10:20.304 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:20.304 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.304 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.304 issued rwts: total=4096,4266,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.304 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.304 00:10:20.304 Run status group 0 (all jobs): 00:10:20.304 READ: bw=55.7MiB/s (58.4MB/s), 9657KiB/s-18.2MiB/s (9889kB/s-19.1MB/s), io=58.2MiB (61.0MB), run=1004-1045msec 00:10:20.304 WRITE: bw=60.0MiB/s (62.9MB/s), 9799KiB/s-19.9MiB/s (10.0MB/s-20.9MB/s), io=62.7MiB (65.7MB), run=1004-1045msec 00:10:20.304 00:10:20.304 Disk stats (read/write): 00:10:20.304 nvme0n1: ios=4209/4608, merge=0/0, ticks=21676/28940, in_queue=50616, util=86.37% 00:10:20.304 nvme0n2: ios=2050/2048, merge=0/0, ticks=25533/31596, in_queue=57129, util=90.44% 00:10:20.304 nvme0n3: ios=3000/3072, merge=0/0, ticks=36508/41178, in_queue=77686, util=99.69% 00:10:20.304 nvme0n4: ios=3706/4096, merge=0/0, ticks=15598/17450, in_queue=33048, util=100.00% 00:10:20.304 08:55:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:20.304 [global] 00:10:20.304 thread=1 00:10:20.304 invalidate=1 00:10:20.304 rw=randwrite 00:10:20.304 time_based=1 00:10:20.304 runtime=1 00:10:20.304 ioengine=libaio 00:10:20.304 direct=1 00:10:20.304 bs=4096 00:10:20.304 iodepth=128 00:10:20.304 norandommap=0 00:10:20.304 numjobs=1 
00:10:20.304 00:10:20.304 verify_dump=1 00:10:20.304 verify_backlog=512 00:10:20.304 verify_state_save=0 00:10:20.304 do_verify=1 00:10:20.304 verify=crc32c-intel 00:10:20.304 [job0] 00:10:20.304 filename=/dev/nvme0n1 00:10:20.304 [job1] 00:10:20.304 filename=/dev/nvme0n2 00:10:20.304 [job2] 00:10:20.304 filename=/dev/nvme0n3 00:10:20.304 [job3] 00:10:20.304 filename=/dev/nvme0n4 00:10:20.304 Could not set queue depth (nvme0n1) 00:10:20.304 Could not set queue depth (nvme0n2) 00:10:20.304 Could not set queue depth (nvme0n3) 00:10:20.304 Could not set queue depth (nvme0n4) 00:10:20.305 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.305 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.305 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.305 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:20.305 fio-3.35 00:10:20.305 Starting 4 threads 00:10:21.679 00:10:21.679 job0: (groupid=0, jobs=1): err= 0: pid=3693855: Wed Jul 24 08:55:59 2024 00:10:21.679 read: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec) 00:10:21.679 slat (usec): min=3, max=22892, avg=136.41, stdev=942.93 00:10:21.679 clat (usec): min=10755, max=62104, avg=17567.14, stdev=7172.86 00:10:21.679 lat (usec): min=10764, max=62142, avg=17703.55, stdev=7253.61 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[11338], 5.00th=[13042], 10.00th=[13698], 20.00th=[14091], 00:10:21.679 | 30.00th=[14484], 40.00th=[15270], 50.00th=[15401], 60.00th=[15664], 00:10:21.679 | 70.00th=[15926], 80.00th=[17171], 90.00th=[24249], 95.00th=[37487], 00:10:21.679 | 99.00th=[45876], 99.50th=[46400], 99.90th=[47449], 99.95th=[50594], 00:10:21.679 | 99.99th=[62129] 00:10:21.679 write: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(11.6MiB/1010msec); 0 zone resets 00:10:21.679 slat (usec): min=4, max=30228, avg=209.86, stdev=1309.05 00:10:21.679 clat (usec): min=9050, max=76186, avg=27614.51, stdev=16608.80 00:10:21.679 lat (usec): min=9560, max=76248, avg=27824.37, stdev=16706.37 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[11207], 5.00th=[13304], 10.00th=[13698], 20.00th=[13960], 00:10:21.679 | 30.00th=[14222], 40.00th=[20055], 50.00th=[23462], 60.00th=[23987], 00:10:21.679 | 70.00th=[29230], 80.00th=[41157], 90.00th=[58983], 95.00th=[66847], 00:10:21.679 | 99.00th=[68682], 99.50th=[68682], 99.90th=[68682], 99.95th=[72877], 00:10:21.679 | 99.99th=[76022] 00:10:21.679 bw ( KiB/s): min=10496, max=12288, per=17.26%, avg=11392.00, stdev=1267.14, samples=2 00:10:21.679 iops : min= 2624, max= 3072, avg=2848.00, stdev=316.78, samples=2 00:10:21.679 lat (msec) : 10=0.13%, 20=60.96%, 50=31.92%, 100=6.99% 00:10:21.679 cpu : usr=3.47%, sys=4.96%, ctx=278, majf=0, minf=11 00:10:21.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:10:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.679 issued rwts: total=2560,2975,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.679 job1: (groupid=0, jobs=1): err= 0: pid=3693856: Wed Jul 24 08:55:59 2024 00:10:21.679 read: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec) 00:10:21.679 slat (usec): min=2, max=5093, avg=84.30, 
stdev=481.38 00:10:21.679 clat (usec): min=6677, max=18668, avg=10708.62, stdev=1431.83 00:10:21.679 lat (usec): min=6738, max=18685, avg=10792.91, stdev=1472.43 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 7373], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[10159], 00:10:21.679 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:10:21.679 | 70.00th=[10945], 80.00th=[11338], 90.00th=[12387], 95.00th=[13435], 00:10:21.679 | 99.00th=[14877], 99.50th=[15401], 99.90th=[18744], 99.95th=[18744], 00:10:21.679 | 99.99th=[18744] 00:10:21.679 write: IOPS=6058, BW=23.7MiB/s (24.8MB/s)(23.8MiB/1006msec); 0 zone resets 00:10:21.679 slat (usec): min=4, max=6318, avg=76.90, stdev=330.85 00:10:21.679 clat (usec): min=5012, max=16893, avg=10953.55, stdev=1371.96 00:10:21.679 lat (usec): min=5547, max=16939, avg=11030.45, stdev=1393.70 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 6718], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10290], 00:10:21.679 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:10:21.679 | 70.00th=[11207], 80.00th=[11338], 90.00th=[12125], 95.00th=[13698], 00:10:21.679 | 99.00th=[15270], 99.50th=[15664], 99.90th=[16909], 99.95th=[16909], 00:10:21.679 | 99.99th=[16909] 00:10:21.679 bw ( KiB/s): min=23168, max=24576, per=36.17%, avg=23872.00, stdev=995.61, samples=2 00:10:21.679 iops : min= 5792, max= 6144, avg=5968.00, stdev=248.90, samples=2 00:10:21.679 lat (msec) : 10=15.65%, 20=84.35% 00:10:21.679 cpu : usr=6.67%, sys=11.34%, ctx=689, majf=0, minf=13 00:10:21.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.679 issued rwts: total=5632,6095,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.679 job2: (groupid=0, jobs=1): err= 0: pid=3693857: Wed Jul 24 08:55:59 2024 00:10:21.679 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:10:21.679 slat (usec): min=3, max=7934, avg=105.51, stdev=585.42 00:10:21.679 clat (usec): min=8786, max=22136, avg=13259.02, stdev=1802.65 00:10:21.679 lat (usec): min=8794, max=22156, avg=13364.53, stdev=1856.09 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[11207], 20.00th=[12518], 00:10:21.679 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:10:21.679 | 70.00th=[13304], 80.00th=[13960], 90.00th=[15664], 95.00th=[16909], 00:10:21.679 | 99.00th=[19006], 99.50th=[19530], 99.90th=[22152], 99.95th=[22152], 00:10:21.679 | 99.99th=[22152] 00:10:21.679 write: IOPS=4722, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1006msec); 0 zone resets 00:10:21.679 slat (usec): min=4, max=6031, avg=98.05, stdev=384.74 00:10:21.679 clat (usec): min=5647, max=22292, avg=13913.84, stdev=1749.65 00:10:21.679 lat (usec): min=6379, max=22800, avg=14011.89, stdev=1773.86 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 8586], 5.00th=[10814], 10.00th=[12518], 20.00th=[13304], 00:10:21.679 | 30.00th=[13566], 40.00th=[13829], 50.00th=[13960], 60.00th=[14091], 00:10:21.679 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14877], 95.00th=[17171], 00:10:21.679 | 99.00th=[19530], 99.50th=[21103], 99.90th=[22152], 99.95th=[22152], 00:10:21.679 | 99.99th=[22414] 00:10:21.679 bw ( KiB/s): min=17176, max=19816, per=28.02%, avg=18496.00, stdev=1866.76, samples=2 00:10:21.679 
iops : min= 4294, max= 4954, avg=4624.00, stdev=466.69, samples=2 00:10:21.679 lat (msec) : 10=3.48%, 20=95.83%, 50=0.68% 00:10:21.679 cpu : usr=6.07%, sys=9.45%, ctx=616, majf=0, minf=11 00:10:21.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.679 issued rwts: total=4608,4751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.679 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.679 job3: (groupid=0, jobs=1): err= 0: pid=3693858: Wed Jul 24 08:55:59 2024 00:10:21.679 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:10:21.679 slat (usec): min=3, max=15947, avg=162.43, stdev=1008.38 00:10:21.679 clat (usec): min=6325, max=54785, avg=17958.12, stdev=7962.85 00:10:21.679 lat (usec): min=6334, max=54801, avg=18120.55, stdev=8046.65 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 6915], 5.00th=[10683], 10.00th=[13304], 20.00th=[13566], 00:10:21.679 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[16057], 00:10:21.679 | 70.00th=[17957], 80.00th=[20317], 90.00th=[27395], 95.00th=[35390], 00:10:21.679 | 99.00th=[51119], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:10:21.679 | 99.99th=[54789] 00:10:21.679 write: IOPS=2869, BW=11.2MiB/s (11.8MB/s)(11.4MiB/1014msec); 0 zone resets 00:10:21.679 slat (usec): min=4, max=14049, avg=191.96, stdev=835.63 00:10:21.679 clat (usec): min=3171, max=58899, avg=28405.23, stdev=13923.81 00:10:21.679 lat (usec): min=3178, max=58906, avg=28597.19, stdev=14026.85 00:10:21.679 clat percentiles (usec): 00:10:21.679 | 1.00th=[ 4555], 5.00th=[10552], 10.00th=[12911], 20.00th=[15139], 00:10:21.679 | 30.00th=[18744], 40.00th=[23462], 50.00th=[23987], 60.00th=[27657], 00:10:21.679 | 70.00th=[36963], 80.00th=[43254], 90.00th=[50070], 95.00th=[54264], 00:10:21.679 | 99.00th=[58459], 99.50th=[58459], 99.90th=[58983], 99.95th=[58983], 00:10:21.679 | 99.99th=[58983] 00:10:21.679 bw ( KiB/s): min= 9992, max=12272, per=16.87%, avg=11132.00, stdev=1612.20, samples=2 00:10:21.679 iops : min= 2498, max= 3068, avg=2783.00, stdev=403.05, samples=2 00:10:21.679 lat (msec) : 4=0.33%, 10=3.07%, 20=50.51%, 50=40.38%, 100=5.70% 00:10:21.679 cpu : usr=3.65%, sys=4.54%, ctx=346, majf=0, minf=15 00:10:21.679 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:21.679 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:21.679 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:21.679 issued rwts: total=2560,2910,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:21.680 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:21.680 00:10:21.680 Run status group 0 (all jobs): 00:10:21.680 READ: bw=59.2MiB/s (62.0MB/s), 9.86MiB/s-21.9MiB/s (10.3MB/s-22.9MB/s), io=60.0MiB (62.9MB), run=1006-1014msec 00:10:21.680 WRITE: bw=64.5MiB/s (67.6MB/s), 11.2MiB/s-23.7MiB/s (11.8MB/s-24.8MB/s), io=65.4MiB (68.5MB), run=1006-1014msec 00:10:21.680 00:10:21.680 Disk stats (read/write): 00:10:21.680 nvme0n1: ios=2286/2560, merge=0/0, ticks=19793/32609, in_queue=52402, util=99.50% 00:10:21.680 nvme0n2: ios=4848/5120, merge=0/0, ticks=26005/25277, in_queue=51282, util=100.00% 00:10:21.680 nvme0n3: ios=3874/4096, merge=0/0, ticks=25341/26299, in_queue=51640, util=100.00% 00:10:21.680 nvme0n4: ios=2090/2383, merge=0/0, ticks=36278/67034, in_queue=103312, util=90.56% 
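The closing phase, traced below, exercises bdev deletion under live I/O: a 10-second read job is kicked off in the background, then the RAID arrays and malloc bdevs are deleted out from under the exported namespaces, so each fio job is expected to surface a per-file error (err=121 Remote I/O error, err=5 Input/output error) instead of hanging. Condensed from target/fio.sh@55-66 as traced, with paths shortened:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  sync
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # 10-second background reader
  fio_pid=$!
  sleep 3                                                    # let the jobs ramp up
  "$rpc" bdev_raid_delete concat0
  "$rpc" bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2; do   # further deletes continue past this excerpt
      "$rpc" bdev_malloc_delete "$m"
  done
  wait "$fio_pid"   # the jobs finish with the io_u errors shown below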
00:10:21.680 08:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:10:21.680 08:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3693994
00:10:21.680 08:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:10:21.680 08:55:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:10:21.680 [global]
00:10:21.680 thread=1
00:10:21.680 invalidate=1
00:10:21.680 rw=read
00:10:21.680 time_based=1
00:10:21.680 runtime=10
00:10:21.680 ioengine=libaio
00:10:21.680 direct=1
00:10:21.680 bs=4096
00:10:21.680 iodepth=1
00:10:21.680 norandommap=1
00:10:21.680 numjobs=1
00:10:21.680
00:10:21.680 [job0]
00:10:21.680 filename=/dev/nvme0n1
00:10:21.680 [job1]
00:10:21.680 filename=/dev/nvme0n2
00:10:21.680 [job2]
00:10:21.680 filename=/dev/nvme0n3
00:10:21.680 [job3]
00:10:21.680 filename=/dev/nvme0n4
00:10:21.680 Could not set queue depth (nvme0n1)
00:10:21.680 Could not set queue depth (nvme0n2)
00:10:21.680 Could not set queue depth (nvme0n3)
00:10:21.680 Could not set queue depth (nvme0n4)
00:10:21.680 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:21.680 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:21.680 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:21.680 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:21.680 fio-3.35
00:10:21.680 Starting 4 threads
00:10:24.960 08:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:10:24.960 08:56:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:10:24.960 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14409728, buflen=4096
00:10:24.960 fio: pid=3694117, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:24.960 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:24.960 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:10:25.250 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=417792, buflen=4096
00:10:25.250 fio: pid=3694108, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:25.250 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:25.250 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:10:25.250 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=24363008, buflen=4096
00:10:25.250 fio: pid=3694088, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:10:25.507 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:10:25.507 08:56:03
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:25.507 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=376832, buflen=4096 00:10:25.507 fio: pid=3694090, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:10:25.765 00:10:25.765 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3694088: Wed Jul 24 08:56:03 2024 00:10:25.765 read: IOPS=1721, BW=6886KiB/s (7052kB/s)(23.2MiB/3455msec) 00:10:25.765 slat (usec): min=4, max=11846, avg=19.83, stdev=216.11 00:10:25.765 clat (usec): min=225, max=44980, avg=553.48, stdev=3025.80 00:10:25.765 lat (usec): min=237, max=52991, avg=573.31, stdev=3060.40 00:10:25.765 clat percentiles (usec): 00:10:25.765 | 1.00th=[ 258], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:10:25.765 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 326], 60.00th=[ 330], 00:10:25.765 | 70.00th=[ 338], 80.00th=[ 355], 90.00th=[ 379], 95.00th=[ 392], 00:10:25.765 | 99.00th=[ 498], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:25.765 | 99.99th=[44827] 00:10:25.765 bw ( KiB/s): min= 136, max=11632, per=70.39%, avg=7333.33, stdev=4510.12, samples=6 00:10:25.765 iops : min= 34, max= 2908, avg=1833.33, stdev=1127.53, samples=6 00:10:25.765 lat (usec) : 250=0.62%, 500=98.42%, 750=0.37%, 1000=0.02% 00:10:25.765 lat (msec) : 20=0.02%, 50=0.54% 00:10:25.765 cpu : usr=1.10%, sys=3.71%, ctx=5953, majf=0, minf=1 00:10:25.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.765 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.765 issued rwts: total=5949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.765 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3694090: Wed Jul 24 08:56:03 2024 00:10:25.765 read: IOPS=25, BW=99.2KiB/s (102kB/s)(368KiB/3709msec) 00:10:25.765 slat (usec): min=10, max=32852, avg=653.15, stdev=3719.25 00:10:25.765 clat (usec): min=357, max=41134, avg=39648.06, stdev=7244.85 00:10:25.765 lat (usec): min=374, max=73987, avg=40228.96, stdev=8225.78 00:10:25.765 clat percentiles (usec): 00:10:25.765 | 1.00th=[ 359], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:25.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.765 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.765 | 99.99th=[41157] 00:10:25.765 bw ( KiB/s): min= 87, max= 112, per=0.95%, avg=99.29, stdev= 9.29, samples=7 00:10:25.765 iops : min= 21, max= 28, avg=24.71, stdev= 2.50, samples=7 00:10:25.765 lat (usec) : 500=3.23% 00:10:25.765 lat (msec) : 50=95.70% 00:10:25.765 cpu : usr=0.00%, sys=0.27%, ctx=96, majf=0, minf=1 00:10:25.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.765 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.765 issued rwts: total=93,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.765 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u 
error, error=Remote I/O error): pid=3694108: Wed Jul 24 08:56:03 2024 00:10:25.765 read: IOPS=32, BW=127KiB/s (130kB/s)(408KiB/3203msec) 00:10:25.765 slat (nsec): min=11615, max=73436, avg=24765.73, stdev=10659.64 00:10:25.765 clat (usec): min=445, max=41458, avg=31071.32, stdev=17476.28 00:10:25.765 lat (usec): min=470, max=41486, avg=31096.17, stdev=17475.57 00:10:25.765 clat percentiles (usec): 00:10:25.765 | 1.00th=[ 449], 5.00th=[ 515], 10.00th=[ 562], 20.00th=[ 586], 00:10:25.765 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:25.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:25.765 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:25.766 | 99.99th=[41681] 00:10:25.766 bw ( KiB/s): min= 96, max= 176, per=1.24%, avg=129.33, stdev=32.95, samples=6 00:10:25.766 iops : min= 24, max= 44, avg=32.33, stdev= 8.24, samples=6 00:10:25.766 lat (usec) : 500=2.91%, 750=21.36% 00:10:25.766 lat (msec) : 50=74.76% 00:10:25.766 cpu : usr=0.00%, sys=0.16%, ctx=103, majf=0, minf=1 00:10:25.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.766 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.766 issued rwts: total=103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.766 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3694117: Wed Jul 24 08:56:03 2024 00:10:25.766 read: IOPS=1210, BW=4841KiB/s (4957kB/s)(13.7MiB/2907msec) 00:10:25.766 slat (nsec): min=4544, max=66755, avg=16346.76, stdev=8737.46 00:10:25.766 clat (usec): min=241, max=41367, avg=799.35, stdev=4252.51 00:10:25.766 lat (usec): min=247, max=41384, avg=815.70, stdev=4253.16 00:10:25.766 clat percentiles (usec): 00:10:25.766 | 1.00th=[ 265], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:10:25.766 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 347], 00:10:25.766 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 433], 00:10:25.766 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:25.766 | 99.99th=[41157] 00:10:25.766 bw ( KiB/s): min= 104, max= 9376, per=37.51%, avg=3908.80, stdev=4180.05, samples=5 00:10:25.766 iops : min= 26, max= 2344, avg=977.20, stdev=1045.01, samples=5 00:10:25.766 lat (usec) : 250=0.14%, 500=98.01%, 750=0.71% 00:10:25.766 lat (msec) : 50=1.11% 00:10:25.766 cpu : usr=1.07%, sys=2.55%, ctx=3519, majf=0, minf=1 00:10:25.766 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:25.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.766 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:25.766 issued rwts: total=3519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:25.766 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:25.766 00:10:25.766 Run status group 0 (all jobs): 00:10:25.766 READ: bw=10.2MiB/s (10.7MB/s), 99.2KiB/s-6886KiB/s (102kB/s-7052kB/s), io=37.7MiB (39.6MB), run=2907-3709msec 00:10:25.766 00:10:25.766 Disk stats (read/write): 00:10:25.766 nvme0n1: ios=5988/0, merge=0/0, ticks=3999/0, in_queue=3999, util=98.94% 00:10:25.766 nvme0n2: ios=89/0, merge=0/0, ticks=3527/0, in_queue=3527, util=95.15% 00:10:25.766 nvme0n3: ios=147/0, merge=0/0, ticks=3168/0, in_queue=3168, util=97.28% 00:10:25.766 nvme0n4: ios=3416/0, 
merge=0/0, ticks=2733/0, in_queue=2733, util=96.74% 00:10:25.766 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:25.766 08:56:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:26.024 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.024 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:26.282 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.282 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:26.539 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:26.539 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:26.797 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:26.797 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3693994 00:10:26.797 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:26.797 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:27.055 08:56:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:27.055 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:10:27.055 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:27.055 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:27.055 nvmf hotplug test: fio failed as expected 00:10:27.055 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:27.313 rmmod nvme_tcp 00:10:27.313 rmmod nvme_fabrics 00:10:27.313 rmmod nvme_keyring 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3692073 ']' 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3692073 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3692073 ']' 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3692073 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3692073 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3692073' 00:10:27.313 killing process with pid 3692073 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3692073 00:10:27.313 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3692073 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:27.572 08:56:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:30.106 00:10:30.106 real 0m23.281s 00:10:30.106 user 1m22.164s 00:10:30.106 sys 0m6.408s 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.106 ************************************ 00:10:30.106 END TEST nvmf_fio_target 00:10:30.106 ************************************ 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.106 ************************************ 00:10:30.106 START TEST nvmf_bdevio 00:10:30.106 ************************************ 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:30.106 * Looking for test storage... 00:10:30.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:30.106 08:56:07 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:30.106 08:56:07 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 
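The trace that follows builds per-vendor whitelists of NIC device IDs and keeps only the PCI functions that match one of them. Condensed into a standalone sketch (vendor and device IDs copied from the trace below; the pci_bus_cache lookups that resolve IDs to bus addresses are elided):

# Device-ID whitelists assembled by gather_supported_nvmf_pci_devs.
intel=0x8086 mellanox=0x15b3
e810=(0x1592 0x159b)            # Intel E810 family
x722=(0x37d2)                   # Intel X722
mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
pci_devs=("${e810[@]}")         # this rig matches the E810 list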
00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:10:32.010 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:32.011 
Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:32.011 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:32.011 Found net devices under 0000:09:00.0: cvl_0_0 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:32.011 Found net devices under 0000:09:00.1: cvl_0_1 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:32.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:10:32.011 00:10:32.011 --- 10.0.0.2 ping statistics --- 00:10:32.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.011 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:10:32.011 00:10:32.011 --- 10.0.0.1 ping statistics --- 00:10:32.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.011 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3696716 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3696716 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3696716 ']' 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:32.011 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
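Both pings succeed because of the wiring traced just above: one port of the dual-port NIC is moved into a private network namespace to play the target, while the second port stays in the root namespace as the initiator. The same sequence, condensed (interface and namespace names copied from the trace):

# Target/initiator split performed by nvmf_tcp_init.
ip netns add cvl_0_0_ns_spdk                      # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ping -c 1 10.0.0.2                                # initiator -> target check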
00:10:32.012 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:32.012 08:56:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.012 [2024-07-24 08:56:09.955903] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:10:32.012 [2024-07-24 08:56:09.955986] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.012 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.012 [2024-07-24 08:56:09.992936] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:32.012 [2024-07-24 08:56:10.023348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.012 [2024-07-24 08:56:10.115584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.012 [2024-07-24 08:56:10.115655] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.012 [2024-07-24 08:56:10.115668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.012 [2024-07-24 08:56:10.115693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.012 [2024-07-24 08:56:10.115703] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:32.012 [2024-07-24 08:56:10.115792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:10:32.012 [2024-07-24 08:56:10.115856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:10:32.012 [2024-07-24 08:56:10.115907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:10:32.012 [2024-07-24 08:56:10.115909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 [2024-07-24 08:56:10.280698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 Malloc0 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.270 [2024-07-24 08:56:10.334438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:32.270 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:32.270 { 00:10:32.270 "params": { 00:10:32.270 "name": "Nvme$subsystem", 00:10:32.270 "trtype": "$TEST_TRANSPORT", 00:10:32.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.270 "adrfam": "ipv4", 00:10:32.270 "trsvcid": "$NVMF_PORT", 00:10:32.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.271 "hdgst": ${hdgst:-false}, 00:10:32.271 "ddgst": ${ddgst:-false} 00:10:32.271 }, 00:10:32.271 "method": "bdev_nvme_attach_controller" 00:10:32.271 } 00:10:32.271 EOF 00:10:32.271 )") 00:10:32.271 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:10:32.271 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
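gen_nvmf_target_json, traced above, renders one connection stanza per subsystem from a heredoc template and pipes the assembled document through jq to validate and normalize it. A minimal standalone sketch of that pattern (reduced to the fields that vary per subsystem; the full template also carries traddr, hostnqn, and digest settings):

# Heredoc-templated JSON validated with jq, as gen_nvmf_target_json does.
subsystem=1
config=$(cat <<EOF
{
  "params": { "name": "Nvme$subsystem", "trsvcid": "4420" },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config" | jq .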
00:10:32.271 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:10:32.271 08:56:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:32.271 "params": { 00:10:32.271 "name": "Nvme1", 00:10:32.271 "trtype": "tcp", 00:10:32.271 "traddr": "10.0.0.2", 00:10:32.271 "adrfam": "ipv4", 00:10:32.271 "trsvcid": "4420", 00:10:32.271 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.271 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.271 "hdgst": false, 00:10:32.271 "ddgst": false 00:10:32.271 }, 00:10:32.271 "method": "bdev_nvme_attach_controller" 00:10:32.271 }' 00:10:32.271 [2024-07-24 08:56:10.382929] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:10:32.271 [2024-07-24 08:56:10.382994] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3696856 ] 00:10:32.529 EAL: No free 2048 kB hugepages reported on node 1 00:10:32.529 [2024-07-24 08:56:10.414537] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:32.529 [2024-07-24 08:56:10.443644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:32.529 [2024-07-24 08:56:10.534259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.529 [2024-07-24 08:56:10.534308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.529 [2024-07-24 08:56:10.534311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.787 I/O targets: 00:10:32.787 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:32.787 00:10:32.787 00:10:32.787 CUnit - A unit testing framework for C - Version 2.1-3 00:10:32.787 http://cunit.sourceforge.net/ 00:10:32.787 00:10:32.787 00:10:32.787 Suite: bdevio tests on: Nvme1n1 00:10:32.787 Test: blockdev write read block ...passed 00:10:33.045 Test: blockdev write zeroes read block ...passed 00:10:33.045 Test: blockdev write zeroes read no split ...passed 00:10:33.045 Test: blockdev write zeroes read split ...passed 00:10:33.045 Test: blockdev write zeroes read split partial ...passed 00:10:33.045 Test: blockdev reset ...[2024-07-24 08:56:10.997790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:10:33.045 [2024-07-24 08:56:10.997896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c0940 (9): Bad file descriptor 00:10:33.045 [2024-07-24 08:56:11.146962] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:10:33.045 passed 00:10:33.303 Test: blockdev write read 8 blocks ...passed 00:10:33.303 Test: blockdev write read size > 128k ...passed 00:10:33.303 Test: blockdev write read invalid size ...passed 00:10:33.303 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:33.303 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:33.303 Test: blockdev write read max offset ...passed 00:10:33.303 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:33.303 Test: blockdev writev readv 8 blocks ...passed 00:10:33.303 Test: blockdev writev readv 30 x 1block ...passed 00:10:33.561 Test: blockdev writev readv block ...passed 00:10:33.561 Test: blockdev writev readv size > 128k ...passed 00:10:33.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:33.561 Test: blockdev comparev and writev ...[2024-07-24 08:56:11.441854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.441891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.441916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.441933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.442281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.442304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.442326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.442342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.442660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.442699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.442721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.442738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.443079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.443109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.443133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:33.561 [2024-07-24 08:56:11.443149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:33.561 passed 00:10:33.561 Test: blockdev nvme passthru rw ...passed 00:10:33.561 Test: blockdev nvme passthru vendor specific ...[2024-07-24 08:56:11.526392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.561 [2024-07-24 08:56:11.526419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.526593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.561 [2024-07-24 08:56:11.526616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.526784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.561 [2024-07-24 08:56:11.526807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:33.561 [2024-07-24 08:56:11.526971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:33.561 [2024-07-24 08:56:11.526994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:33.561 passed 00:10:33.561 Test: blockdev nvme admin passthru ...passed 00:10:33.561 Test: blockdev copy ...passed 00:10:33.561 00:10:33.561 Run Summary: Type Total Ran Passed Failed Inactive 00:10:33.561 suites 1 1 n/a 0 0 00:10:33.561 tests 23 23 23 0 0 00:10:33.561 asserts 152 152 152 0 n/a 00:10:33.561 00:10:33.561 Elapsed time = 1.481 seconds 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:33.819 rmmod nvme_tcp 00:10:33.819 rmmod nvme_fabrics 00:10:33.819 rmmod nvme_keyring 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
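The teardown below follows the same killprocess pattern used after the fio target test: confirm the pid still names the expected SPDK reactor before signalling it, then reap it. A condensed sketch (pid copied from the trace; wait can only reap a child of the calling shell, which nvmf_tgt is here):

# killprocess pattern: verify the process name, signal, then reap.
pid=3696716
name=$(ps --no-headers -o comm= "$pid")
if [ "$name" != sudo ]; then    # refuse to signal a sudo wrapper directly
    kill "$pid"
    wait "$pid"
fi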
00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3696716 ']' 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3696716 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3696716 ']' 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3696716 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3696716 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3696716' 00:10:33.819 killing process with pid 3696716 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3696716 00:10:33.819 08:56:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3696716 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:34.077 08:56:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:36.612 00:10:36.612 real 0m6.528s 00:10:36.612 user 0m11.504s 00:10:36.612 sys 0m2.122s 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 ************************************ 00:10:36.612 END TEST nvmf_bdevio 00:10:36.612 ************************************ 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:36.612 00:10:36.612 real 3m50.761s 00:10:36.612 user 9m45.790s 00:10:36.612 sys 1m12.548s 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 ************************************ 00:10:36.612 END TEST nvmf_target_core 00:10:36.612 ************************************ 00:10:36.612 08:56:14 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.612 08:56:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:36.612 08:56:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.612 08:56:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:36.612 ************************************ 00:10:36.612 START TEST nvmf_target_extra 00:10:36.612 ************************************ 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:36.612 * Looking for test storage... 00:10:36.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.612 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
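The host identity plumbing sourced from nvmf/common.sh above can be reproduced by hand; a minimal sketch follows (the parameter expansion used to carve the host ID out of the NQN is an assumption; the script may extract it differently):

    # derive an NVMe-oF host identity the way common.sh's trace suggests (illustrative)
    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep the bare UUID after the prefix (assumed extraction)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

These presumably ride along as "${NVME_HOST[@]}" whenever the suite invokes its NVME_CONNECT='nvme connect' helper.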
00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:36.613 ************************************ 00:10:36.613 START TEST nvmf_example 00:10:36.613 ************************************ 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:36.613 * Looking for test storage... 00:10:36.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:36.613 08:56:14 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:36.613 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:10:36.614 08:56:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.516 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:38.517 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:38.517 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:38.517 Found net devices under 0000:09:00.0: cvl_0_0 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.517 08:56:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:38.517 Found net devices under 0000:09:00.1: cvl_0_1 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:38.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:10:38.517 00:10:38.517 --- 10.0.0.2 ping statistics --- 00:10:38.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.517 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:10:38.517 00:10:38.517 --- 10.0.0.1 ping statistics --- 00:10:38.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.517 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.517 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3698981 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3698981 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3698981 ']' 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:38.518 08:56:16 
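The ping exchange above verifies a point-to-point link that nvmf_tcp_init builds entirely out of network namespaces: one port of the two-port NIC (cvl_0_0) is moved into a private namespace and serves as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace (a sketch; the interface names are the ones this rig reports):

    # two-namespace back-to-back TCP topology, as traced above (illustrative)
    ip netns add cvl_0_0_ns_spdk                 # namespace that will own the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port in
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe-oF port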
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:38.518 08:56:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.776 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.708 08:56:17 
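Stripped of the rpc_cmd plumbing, the target the example test just provisioned comes down to five RPCs, shown here with the same arguments as traced above (scripts/rpc.py as the client is an assumption about what rpc_cmd wraps):

    # provision the example target: transport, backing bdev, subsystem, namespace, listener
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O units
    scripts/rpc.py bdev_malloc_create 64 512                  # 64 MiB RAM bdev, 512 B blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420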
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:39.708 08:56:17 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:39.708 EAL: No free 2048 kB hugepages reported on node 1 00:10:51.950 Initializing NVMe Controllers 00:10:51.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.950 Initialization complete. Launching workers. 00:10:51.950 ======================================================== 00:10:51.950 Latency(us) 00:10:51.950 Device Information : IOPS MiB/s Average min max 00:10:51.950 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13616.21 53.19 4699.97 938.92 15379.09 00:10:51.950 ======================================================== 00:10:51.950 Total : 13616.21 53.19 4699.97 938.92 15379.09 00:10:51.950 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:51.950 rmmod nvme_tcp 00:10:51.950 rmmod nvme_fabrics 00:10:51.950 rmmod nvme_keyring 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3698981 ']' 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3698981 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3698981 ']' 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3698981 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:51.950 08:56:27 
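The IOPS/latency table above comes from the single spdk_nvme_perf invocation traced just before it; reflowed with its flags spelled out (same command, readability only; reading -M as the read share of the mix is a gloss on the perf tool's rwmixread option):

    # 10 seconds of 4 KiB random mixed I/O (-M 30: about 30% reads) at queue depth 64,
    # against the TCP subsystem provisioned above
    build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

As a sanity check, the reported numbers are self-consistent under Little's law: 64 outstanding I/Os divided by the 4699.97 us average latency gives about 13.6k IOPS, matching the 13616.21 in the table.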
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3698981 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3698981' 00:10:51.950 killing process with pid 3698981 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 3698981 00:10:51.950 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 3698981 00:10:51.950 nvmf threads initialize successfully 00:10:51.950 bdev subsystem init successfully 00:10:51.950 created a nvmf target service 00:10:51.950 create targets's poll groups done 00:10:51.950 all subsystems of target started 00:10:51.950 nvmf target is running 00:10:51.950 all subsystems of target stopped 00:10:51.950 destroy targets's poll groups done 00:10:51.950 destroyed the nvmf target service 00:10:51.950 bdev subsystem finish successfully 00:10:51.950 nvmf threads destroy successfully 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.950 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.209 00:10:52.209 real 0m15.914s 00:10:52.209 user 0m44.305s 00:10:52.209 sys 0m3.744s 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.209 ************************************ 00:10:52.209 END TEST nvmf_example 00:10:52.209 ************************************ 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:52.209 08:56:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:52.209 08:56:30 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.471 ************************************ 00:10:52.471 START TEST nvmf_filesystem 00:10:52.471 ************************************ 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.471 * Looking for test storage... 00:10:52.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:52.471 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:52.472 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:52.472 
08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:52.472 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:52.472 #define SPDK_CONFIG_H 00:10:52.472 #define SPDK_CONFIG_APPS 1 00:10:52.472 #define SPDK_CONFIG_ARCH native 00:10:52.472 #undef SPDK_CONFIG_ASAN 00:10:52.472 #undef SPDK_CONFIG_AVAHI 00:10:52.472 #undef SPDK_CONFIG_CET 00:10:52.472 #define SPDK_CONFIG_COVERAGE 1 00:10:52.472 #define SPDK_CONFIG_CROSS_PREFIX 00:10:52.472 #undef SPDK_CONFIG_CRYPTO 00:10:52.472 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:52.472 #undef SPDK_CONFIG_CUSTOMOCF 00:10:52.472 #undef SPDK_CONFIG_DAOS 00:10:52.472 #define SPDK_CONFIG_DAOS_DIR 00:10:52.472 #define SPDK_CONFIG_DEBUG 1 00:10:52.472 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:52.473 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.473 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:10:52.473 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.473 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:52.473 #undef SPDK_CONFIG_DPDK_UADK 00:10:52.473 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:52.473 #define SPDK_CONFIG_EXAMPLES 1 00:10:52.473 #undef SPDK_CONFIG_FC 00:10:52.473 #define SPDK_CONFIG_FC_PATH 00:10:52.473 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:52.473 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:52.473 #undef SPDK_CONFIG_FUSE 00:10:52.473 #undef SPDK_CONFIG_FUZZER 00:10:52.473 #define SPDK_CONFIG_FUZZER_LIB 00:10:52.473 #undef SPDK_CONFIG_GOLANG 00:10:52.473 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:52.473 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:52.473 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:52.473 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:52.473 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:52.473 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:52.473 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:52.473 #define SPDK_CONFIG_IDXD 1 00:10:52.473 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:52.473 #undef SPDK_CONFIG_IPSEC_MB 00:10:52.473 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:52.473 #define SPDK_CONFIG_ISAL 1 00:10:52.473 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:52.473 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:52.473 #define SPDK_CONFIG_LIBDIR 00:10:52.473 #undef SPDK_CONFIG_LTO 00:10:52.473 #define SPDK_CONFIG_MAX_LCORES 128 00:10:52.473 #define SPDK_CONFIG_NVME_CUSE 1 00:10:52.473 #undef SPDK_CONFIG_OCF 00:10:52.473 #define SPDK_CONFIG_OCF_PATH 00:10:52.473 #define SPDK_CONFIG_OPENSSL_PATH 00:10:52.473 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:52.473 #define SPDK_CONFIG_PGO_DIR 00:10:52.473 #undef SPDK_CONFIG_PGO_USE 00:10:52.473 #define SPDK_CONFIG_PREFIX /usr/local 00:10:52.473 #undef SPDK_CONFIG_RAID5F 00:10:52.473 #undef SPDK_CONFIG_RBD 00:10:52.473 #define SPDK_CONFIG_RDMA 1 00:10:52.473 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:52.473 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:52.473 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:52.473 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:52.473 #define SPDK_CONFIG_SHARED 1 00:10:52.473 #undef SPDK_CONFIG_SMA 00:10:52.473 #define SPDK_CONFIG_TESTS 1 00:10:52.473 #undef SPDK_CONFIG_TSAN 00:10:52.473 #define SPDK_CONFIG_UBLK 1 00:10:52.473 #define SPDK_CONFIG_UBSAN 1 00:10:52.473 #undef SPDK_CONFIG_UNIT_TESTS 00:10:52.473 #undef SPDK_CONFIG_URING 00:10:52.473 #define 
SPDK_CONFIG_URING_PATH 00:10:52.473 #undef SPDK_CONFIG_URING_ZNS 00:10:52.473 #undef SPDK_CONFIG_USDT 00:10:52.473 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:52.473 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:52.473 #define SPDK_CONFIG_VFIO_USER 1 00:10:52.473 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:52.473 #define SPDK_CONFIG_VHOST 1 00:10:52.473 #define SPDK_CONFIG_VIRTIO 1 00:10:52.473 #undef SPDK_CONFIG_VTUNE 00:10:52.473 #define SPDK_CONFIG_VTUNE_DIR 00:10:52.473 #define SPDK_CONFIG_WERROR 1 00:10:52.473 #define SPDK_CONFIG_WPDK_DIR 00:10:52.473 #undef SPDK_CONFIG_XNVME 00:10:52.473 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.473 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
pm/common@81 -- # [[ Linux == Linux ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:52.473 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:52.474 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:52.474 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : main 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 
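The alternating "-- # : 0" / "-- # export SPDK_TEST_*" pairs traced through this stretch are bash's no-op default-assignment idiom: the CI job can export any test flag before autotest_common.sh runs, and the script's default only fills in when the variable is still empty. A minimal sketch of the pattern, with flag names and values taken from this run's trace (the exact wording inside autotest_common.sh is assumed, not quoted):

    #!/usr/bin/env bash
    # ':' is a no-op command; "${VAR:=default}" assigns only when VAR is
    # unset or empty, so a value exported by the Jenkins job wins over
    # the script default set here.
    : "${SPDK_TEST_NVMF:=1}"              # traced as '-- # : 1'
    export SPDK_TEST_NVMF                 # traced as '-- # export SPDK_TEST_NVMF'
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced as '-- # : tcp'
    export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}"      # traced as '-- # : e810'
    export SPDK_TEST_NVMF_NICS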
00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.474 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
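The @175-@178 exports above assemble the dynamic-loader path from the three build trees this job uses: the SPDK libraries, the external DPDK libraries, and the bundled libvfio-user install. A hedged reconstruction of the pattern; appending the triple on every source is one plausible reading of why the logged value repeats the same three directories, and ld.so simply ignores the duplicates:

    export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
    export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
    export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
    # Each nested test script re-sources the common file, so the same
    # triple shows up once per nesting level in the value logged above.
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR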
00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 
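The @193-@238 entries above configure the sanitizers for the run: ASAN aborts on error without core dumps, UBSAN halts with exit code 134, and a freshly rebuilt suppression file silences a known libfuse3 leak for LSAN. A minimal sketch of that setup; the '@200 -- # cat' step is elided in the trace, so a plain redirect stands in for it here as an assumption:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"                      # the '@199 -- # rm -rf' above
    echo 'leak:libfuse3.so' > "$asan_suppression_file"   # the '@236 -- # echo' above
    export LSAN_OPTIONS=suppressions=$asan_suppression_file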
00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- 
# CLEAR_HUGE=yes 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3700827 ]] 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3700827 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.n7Ot80 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:52.475 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.n7Ot80/tests/target /tmp/spdk.n7Ot80 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:52.476 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=952066048 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4332363776 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=49743437824 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=12251271168 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30986096640 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=11255808 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376530944 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22413312 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996459520 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=897024 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:52.476 * Looking for test storage... 
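The block above is set_test_storage parsing 'df -T' output into per-mount associative arrays; the entries that follow reuse those arrays to pick a mount with room for the requested 2214592512 bytes (about 2 GiB). A hedged sketch of the parse loop, mirroring the field order the trace reads ('source fs size use avail _ mount'); the byte-sized values logged above suggest df runs with 1-byte blocks, which is an assumption:

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source    # e.g. spdk_root
        fss["$mount"]=$fs           # e.g. overlay
        sizes["$mount"]=$size       # e.g. 61994708992 on / in this run
        uses["$mount"]=$use         # 12251271168 on /
        avails["$mount"]=$avail     # 49743437824 on /
    done < <(df -T --block-size=1 | grep -v Filesystem)
    # The checks just below then confirm avail >= requested and project
    # the new usage: 12251271168 + 2214592512 = 14465863680, the
    # 'new_size' value in the trace, which stays under 95% of the fs.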
00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=49743437824 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=14465863680 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:52.476 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.477 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:54.414 
08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:54.414 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:54.415 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:54.415 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:54.415 Found net devices under 0000:09:00.0: cvl_0_0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:54.415 Found net devices under 0000:09:00.1: cvl_0_1 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:54.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:10:54.415 00:10:54.415 --- 10.0.0.2 ping statistics --- 00:10:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.415 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:10:54.415 00:10:54.415 --- 10.0.0.1 ping statistics --- 00:10:54.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.415 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:54.415 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:54.416 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:54.416 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 ************************************ 00:10:54.674 START TEST nvmf_filesystem_no_in_capsule 00:10:54.674 ************************************ 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3702410 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3702410 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3702410 ']' 00:10:54.674 
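nvmf_tcp_init, traced just above, splits the two ports into a target/initiator pair: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and TCP port 4420 is opened in iptables before both directions are ping-verified. The same plumbing, condensed from the commands in the trace:

# Condensed from the nvmf_tcp_init commands traced above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> initiator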
08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:54.674 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.674 [2024-07-24 08:56:32.591969] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:10:54.674 [2024-07-24 08:56:32.592040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.674 EAL: No free 2048 kB hugepages reported on node 1 00:10:54.674 [2024-07-24 08:56:32.631563] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:54.674 [2024-07-24 08:56:32.658526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.674 [2024-07-24 08:56:32.747905] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.674 [2024-07-24 08:56:32.747948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.674 [2024-07-24 08:56:32.747982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.674 [2024-07-24 08:56:32.747993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.674 [2024-07-24 08:56:32.748003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
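nvmfappstart then launches nvmf_tgt inside that namespace (-i 0 for shared-memory id 0, -e 0xFFFF for the tracepoint group mask reported in the notices above, -m 0xF for a four-core mask) and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A sketch of that handshake; the rpc.py polling loop is an assumed stand-in for the harness's waitforlisten helper:

# Start the target inside the namespace and wait for its RPC socket (sketch).
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# waitforlisten equivalent: poll until the UNIX-domain RPC socket responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # give up if the target died during startup
    sleep 0.5
done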
00:10:54.674 [2024-07-24 08:56:32.748079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.675 [2024-07-24 08:56:32.748145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.675 [2024-07-24 08:56:32.748211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.675 [2024-07-24 08:56:32.748213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.933 [2024-07-24 08:56:32.905661] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.933 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.192 Malloc1 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.192 08:56:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.192 [2024-07-24 08:56:33.097124] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:10:55.192 { 00:10:55.192 "name": "Malloc1", 00:10:55.192 "aliases": [ 00:10:55.192 "8d731263-906c-422d-9293-c3a8e15d328a" 00:10:55.192 ], 00:10:55.192 "product_name": "Malloc disk", 00:10:55.192 "block_size": 512, 00:10:55.192 "num_blocks": 1048576, 00:10:55.192 "uuid": "8d731263-906c-422d-9293-c3a8e15d328a", 00:10:55.192 "assigned_rate_limits": { 00:10:55.192 "rw_ios_per_sec": 0, 00:10:55.192 "rw_mbytes_per_sec": 0, 00:10:55.192 "r_mbytes_per_sec": 0, 00:10:55.192 "w_mbytes_per_sec": 0 00:10:55.192 }, 00:10:55.192 "claimed": true, 00:10:55.192 "claim_type": "exclusive_write", 00:10:55.192 "zoned": false, 00:10:55.192 "supported_io_types": { 00:10:55.192 "read": 
true, 00:10:55.192 "write": true, 00:10:55.192 "unmap": true, 00:10:55.192 "flush": true, 00:10:55.192 "reset": true, 00:10:55.192 "nvme_admin": false, 00:10:55.192 "nvme_io": false, 00:10:55.192 "nvme_io_md": false, 00:10:55.192 "write_zeroes": true, 00:10:55.192 "zcopy": true, 00:10:55.192 "get_zone_info": false, 00:10:55.192 "zone_management": false, 00:10:55.192 "zone_append": false, 00:10:55.192 "compare": false, 00:10:55.192 "compare_and_write": false, 00:10:55.192 "abort": true, 00:10:55.192 "seek_hole": false, 00:10:55.192 "seek_data": false, 00:10:55.192 "copy": true, 00:10:55.192 "nvme_iov_md": false 00:10:55.192 }, 00:10:55.192 "memory_domains": [ 00:10:55.192 { 00:10:55.192 "dma_device_id": "system", 00:10:55.192 "dma_device_type": 1 00:10:55.192 }, 00:10:55.192 { 00:10:55.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.192 "dma_device_type": 2 00:10:55.192 } 00:10:55.192 ], 00:10:55.192 "driver_specific": {} 00:10:55.192 } 00:10:55.192 ]' 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:55.192 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.755 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.755 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:10:55.755 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.755 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:55.755 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:58.281 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:58.281 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:58.539 08:56:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:59.471 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:59.471 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:59.471 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:59.471 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:59.471 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.729 ************************************ 00:10:59.729 START TEST filesystem_ext4 00:10:59.729 ************************************ 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
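Before the first mkfs, the script provisions a 512 MiB malloc-backed subsystem over RPC and attaches to it from the root namespace, then checks that the visible NVMe namespace matches the bdev size (block_size 512 x num_blocks 1048576 = 536870912 bytes) and lays down a single GPT partition. Condensed from the rpc_cmd, nvme connect, and parted calls traced above, with this run's hostnqn/hostid left out:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # flags as traced; -c 0 = no in-capsule data
$rpc bdev_malloc_create 512 512 -b Malloc1            # 512 MiB bdev with 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Host side: attach, size-check, and partition the namespace (nvme0n1 here).
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
bs=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')   # 512
nb=$($rpc bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')   # 1048576
echo $((bs * nb))                                             # 536870912, matches the nvme size
parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe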
00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:59.729 08:56:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:59.729 mke2fs 1.46.5 (30-Dec-2021) 00:10:59.729 Discarding device blocks: 0/522240 done 00:10:59.729 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:59.729 Filesystem UUID: 9047af89-8c3b-4b88-8abf-4b7f985efad6 00:10:59.729 Superblock backups stored on blocks: 00:10:59.729 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:59.729 00:10:59.729 Allocating group tables: 0/64 done 00:10:59.729 Writing inode tables: 0/64 done 00:10:59.986 Creating journal (8192 blocks): done 00:11:00.807 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:00.807 00:11:00.807 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:00.807 08:56:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.371 
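filesystem_ext4 runs through the shared make_filesystem helper, whose behavior is visible in the trace: pick the force flag per filesystem (mke2fs wants -F, btrfs-progs and xfsprogs want -f) and run mkfs against the new partition. A stripped-down sketch of the helper as it behaves in these traces, without the harness's retry and error handling:

# Sketch of make_filesystem as traced above (common/autotest_common.sh@924ff).
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    if [ "$fstype" = ext4 ]; then
        force=-F          # mke2fs overwrites an existing filesystem only with -F
    else
        force=-f          # mkfs.btrfs and mkfs.xfs both take -f
    fi
    mkfs.$fstype $force "$dev_name"
}

make_filesystem ext4 /dev/nvme0n1p1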
08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3702410 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:01.371 00:11:01.371 real 0m1.803s 00:11:01.371 user 0m0.014s 00:11:01.371 sys 0m0.059s 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:01.371 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:01.371 ************************************ 00:11:01.371 END TEST filesystem_ext4 00:11:01.371 ************************************ 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:01.372 ************************************ 00:11:01.372 START TEST filesystem_btrfs 00:11:01.372 ************************************ 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:01.372 08:56:39 
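Each filesystem pass proves the target survives real I/O with the same check, traced above for ext4: mount, create and remove a file with syncs around it, unmount, then confirm the target process is still alive and lsblk still lists both the namespace and its partition. The check, verbatim from the trace:

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                        # nvmf_tgt (pid 3702410 here) must still be running
lsblk -l -o NAME | grep -q -w nvme0n1
lsblk -l -o NAME | grep -q -w nvme0n1p1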
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:01.372 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:01.630 btrfs-progs v6.6.2 00:11:01.630 See https://btrfs.readthedocs.io for more information. 00:11:01.630 00:11:01.630 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:01.630 NOTE: several default settings have changed in version 5.15, please make sure 00:11:01.630 this does not affect your deployments: 00:11:01.630 - DUP for metadata (-m dup) 00:11:01.630 - enabled no-holes (-O no-holes) 00:11:01.630 - enabled free-space-tree (-R free-space-tree) 00:11:01.630 00:11:01.630 Label: (null) 00:11:01.630 UUID: bc7f3b5e-5d21-46b4-9ec3-d93fd8dfc54d 00:11:01.630 Node size: 16384 00:11:01.630 Sector size: 4096 00:11:01.630 Filesystem size: 510.00MiB 00:11:01.630 Block group profiles: 00:11:01.630 Data: single 8.00MiB 00:11:01.630 Metadata: DUP 32.00MiB 00:11:01.630 System: DUP 8.00MiB 00:11:01.630 SSD detected: yes 00:11:01.630 Zoned device: no 00:11:01.630 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:01.630 Runtime features: free-space-tree 00:11:01.630 Checksum: crc32c 00:11:01.630 Number of devices: 1 00:11:01.630 Devices: 00:11:01.630 ID SIZE PATH 00:11:01.630 1 510.00MiB /dev/nvme0n1p1 00:11:01.630 00:11:01.630 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:01.630 08:56:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3702410 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.564 00:11:02.564 real 0m0.952s 00:11:02.564 user 0m0.014s 00:11:02.564 sys 0m0.115s 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:02.564 ************************************ 00:11:02.564 END TEST filesystem_btrfs 00:11:02.564 ************************************ 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:02.564 ************************************ 00:11:02.564 START TEST filesystem_xfs 00:11:02.564 ************************************ 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:02.564 08:56:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:02.564 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:02.564 = sectsz=512 attr=2, projid32bit=1 00:11:02.564 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:02.564 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:02.564 data = bsize=4096 blocks=130560, imaxpct=25 00:11:02.564 = sunit=0 swidth=0 blks 00:11:02.565 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:02.565 log =internal log bsize=4096 blocks=16384, version=2 00:11:02.565 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:02.565 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:03.496 Discarding blocks...Done. 00:11:03.496 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:03.496 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.019 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3702410 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.019 00:11:06.019 real 0m3.653s 00:11:06.019 user 0m0.017s 00:11:06.019 sys 0m0.059s 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:06.019 ************************************ 00:11:06.019 END TEST filesystem_xfs 00:11:06.019 ************************************ 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:06.019 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:06.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3702410 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3702410 ']' 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3702410 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3702410 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3702410' 00:11:06.276 killing process with pid 3702410 00:11:06.276 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3702410 00:11:06.276 08:56:44 
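Teardown reverses the setup in strict order, as traced: drop the test partition under an flock on the whole namespace, disconnect the host, delete the subsystem over RPC, then stop the target by pid (killprocess is kill followed by wait). Condensed:

flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid"        # killprocess: signal the target...
wait "$nvmfpid"        # ...then reap it, as traced at common.sh@967/@972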
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3702410 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:06.839 00:11:06.839 real 0m12.155s 00:11:06.839 user 0m46.540s 00:11:06.839 sys 0m1.840s 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 ************************************ 00:11:06.839 END TEST nvmf_filesystem_no_in_capsule 00:11:06.839 ************************************ 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 ************************************ 00:11:06.839 START TEST nvmf_filesystem_in_capsule 00:11:06.839 ************************************ 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3704018 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3704018 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3704018 ']' 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:06.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:06.839 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.840 [2024-07-24 08:56:44.800126] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:11:06.840 [2024-07-24 08:56:44.800219] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:06.840 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.840 [2024-07-24 08:56:44.837323] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:06.840 [2024-07-24 08:56:44.869872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.097 [2024-07-24 08:56:44.961215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.097 [2024-07-24 08:56:44.961268] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.097 [2024-07-24 08:56:44.961283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.097 [2024-07-24 08:56:44.961304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:07.097 [2024-07-24 08:56:44.961316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
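The in-capsule pass below repeats the entire flow with a single transport-level change: nvmf_filesystem_part is called with 4096 instead of 0, so nvmf_create_transport gets -c 4096 and small write payloads can travel inside the TCP command capsule instead of as a separate data transfer. The one-line delta against the first pass:

# First pass (nvmf_filesystem_no_in_capsule), as traced earlier:
#   rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
# This pass (nvmf_filesystem_in_capsule):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096   # 4 KiB in-capsule data size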
00:11:07.097 [2024-07-24 08:56:44.961373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.097 [2024-07-24 08:56:44.961447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.097 [2024-07-24 08:56:44.961538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.097 [2024-07-24 08:56:44.961540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.097 [2024-07-24 08:56:45.117593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.097 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.355 Malloc1 00:11:07.355 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.355 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.355 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.356 [2024-07-24 08:56:45.297417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:11:07.356 { 00:11:07.356 "name": "Malloc1", 00:11:07.356 "aliases": [ 00:11:07.356 "d93bf821-a4ed-4297-b193-6b1cbf018235" 00:11:07.356 ], 00:11:07.356 "product_name": "Malloc disk", 00:11:07.356 "block_size": 512, 00:11:07.356 "num_blocks": 1048576, 00:11:07.356 "uuid": "d93bf821-a4ed-4297-b193-6b1cbf018235", 00:11:07.356 "assigned_rate_limits": { 00:11:07.356 "rw_ios_per_sec": 0, 00:11:07.356 "rw_mbytes_per_sec": 0, 00:11:07.356 "r_mbytes_per_sec": 0, 00:11:07.356 "w_mbytes_per_sec": 0 00:11:07.356 }, 00:11:07.356 "claimed": true, 00:11:07.356 "claim_type": "exclusive_write", 00:11:07.356 "zoned": false, 00:11:07.356 "supported_io_types": { 00:11:07.356 "read": true, 00:11:07.356 "write": true, 00:11:07.356 "unmap": true, 00:11:07.356 "flush": true, 00:11:07.356 "reset": true, 00:11:07.356 "nvme_admin": false, 
00:11:07.356 "nvme_io": false, 00:11:07.356 "nvme_io_md": false, 00:11:07.356 "write_zeroes": true, 00:11:07.356 "zcopy": true, 00:11:07.356 "get_zone_info": false, 00:11:07.356 "zone_management": false, 00:11:07.356 "zone_append": false, 00:11:07.356 "compare": false, 00:11:07.356 "compare_and_write": false, 00:11:07.356 "abort": true, 00:11:07.356 "seek_hole": false, 00:11:07.356 "seek_data": false, 00:11:07.356 "copy": true, 00:11:07.356 "nvme_iov_md": false 00:11:07.356 }, 00:11:07.356 "memory_domains": [ 00:11:07.356 { 00:11:07.356 "dma_device_id": "system", 00:11:07.356 "dma_device_type": 1 00:11:07.356 }, 00:11:07.356 { 00:11:07.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.356 "dma_device_type": 2 00:11:07.356 } 00:11:07.356 ], 00:11:07.356 "driver_specific": {} 00:11:07.356 } 00:11:07.356 ]' 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:07.356 08:56:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.288 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.288 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:11:08.288 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.288 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:08.288 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:10.184 08:56:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:10.184 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:11.115 08:56:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.047 ************************************ 00:11:12.047 START TEST filesystem_in_capsule_ext4 00:11:12.047 ************************************ 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:12.047 08:56:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:12.047 08:56:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:12.047 mke2fs 1.46.5 (30-Dec-2021) 00:11:12.047 Discarding device blocks: 0/522240 done 00:11:12.047 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:12.047 Filesystem UUID: 1637d13a-539e-4733-a866-605111bdb77b 00:11:12.047 Superblock backups stored on blocks: 00:11:12.047 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:12.047 00:11:12.047 Allocating group tables: 0/64 done 00:11:12.047 Writing inode tables: 0/64 done 00:11:12.612 Creating journal (8192 blocks): done 00:11:12.612 Writing superblocks and filesystem accounting information: 0/64 done 00:11:12.612 00:11:12.612 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:12.612 08:56:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:13.177 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.435 08:56:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3704018 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.435 00:11:13.435 real 0m1.408s 00:11:13.435 user 0m0.008s 00:11:13.435 sys 0m0.066s 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:13.435 ************************************ 00:11:13.435 END TEST filesystem_in_capsule_ext4 00:11:13.435 ************************************ 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.435 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.436 ************************************ 00:11:13.436 START TEST filesystem_in_capsule_btrfs 00:11:13.436 ************************************ 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@927 -- # local force 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:13.436 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:13.702 btrfs-progs v6.6.2 00:11:13.702 See https://btrfs.readthedocs.io for more information. 00:11:13.702 00:11:13.702 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:13.702 NOTE: several default settings have changed in version 5.15, please make sure 00:11:13.702 this does not affect your deployments: 00:11:13.702 - DUP for metadata (-m dup) 00:11:13.702 - enabled no-holes (-O no-holes) 00:11:13.702 - enabled free-space-tree (-R free-space-tree) 00:11:13.702 00:11:13.702 Label: (null) 00:11:13.702 UUID: d4565c30-2949-4294-bd70-6973921aa983 00:11:13.702 Node size: 16384 00:11:13.702 Sector size: 4096 00:11:13.702 Filesystem size: 510.00MiB 00:11:13.702 Block group profiles: 00:11:13.702 Data: single 8.00MiB 00:11:13.702 Metadata: DUP 32.00MiB 00:11:13.702 System: DUP 8.00MiB 00:11:13.702 SSD detected: yes 00:11:13.702 Zoned device: no 00:11:13.702 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:13.702 Runtime features: free-space-tree 00:11:13.702 Checksum: crc32c 00:11:13.702 Number of devices: 1 00:11:13.702 Devices: 00:11:13.702 ID SIZE PATH 00:11:13.702 1 510.00MiB /dev/nvme0n1p1 00:11:13.702 00:11:13.702 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:13.702 08:56:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3704018 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.306 00:11:14.306 real 0m0.916s 00:11:14.306 user 0m0.023s 00:11:14.306 sys 0m0.108s 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:14.306 ************************************ 00:11:14.306 END TEST filesystem_in_capsule_btrfs 00:11:14.306 ************************************ 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.306 ************************************ 00:11:14.306 START TEST filesystem_in_capsule_xfs 00:11:14.306 ************************************ 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:14.306 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:14.306 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:14.306 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:14.306 = sectsz=512 attr=2, projid32bit=1 00:11:14.306 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:14.306 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:14.306 data = bsize=4096 blocks=130560, imaxpct=25 00:11:14.306 = sunit=0 swidth=0 blks 00:11:14.306 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:14.306 log =internal log bsize=4096 blocks=16384, version=2 00:11:14.306 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:14.306 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:15.679 Discarding blocks...Done. 00:11:15.679 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:15.679 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:17.051 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3704018 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:17.309 00:11:17.309 real 0m2.996s 00:11:17.309 user 0m0.022s 00:11:17.309 sys 0m0.051s 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:17.309 ************************************ 00:11:17.309 END TEST filesystem_in_capsule_xfs 00:11:17.309 ************************************ 00:11:17.309 08:56:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:17.309 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3704018 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3704018 ']' 00:11:17.567 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3704018 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3704018 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = 
sudo ']' 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3704018' 00:11:17.568 killing process with pid 3704018 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3704018 00:11:17.568 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3704018 00:11:18.133 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:18.133 00:11:18.133 real 0m11.210s 00:11:18.133 user 0m42.910s 00:11:18.133 sys 0m1.734s 00:11:18.133 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:18.134 ************************************ 00:11:18.134 END TEST nvmf_filesystem_in_capsule 00:11:18.134 ************************************ 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:18.134 08:56:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:18.134 rmmod nvme_tcp 00:11:18.134 rmmod nvme_fabrics 00:11:18.134 rmmod nvme_keyring 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.134 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:20.036 00:11:20.036 real 
0m27.747s 00:11:20.036 user 1m30.277s 00:11:20.036 sys 0m5.123s 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:20.036 ************************************ 00:11:20.036 END TEST nvmf_filesystem 00:11:20.036 ************************************ 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:20.036 ************************************ 00:11:20.036 START TEST nvmf_target_discovery 00:11:20.036 ************************************ 00:11:20.036 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:20.295 * Looking for test storage... 00:11:20.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:20.295 08:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:20.295 08:56:58 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:20.295 08:56:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.198 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:22.198 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:22.198 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:22.199 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:22.199 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:22.199 Found net devices under 0000:09:00.0: cvl_0_0 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:22.199 Found net devices under 0000:09:00.1: cvl_0_1 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:22.199 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:22.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:22.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:11:22.457 00:11:22.457 --- 10.0.0.2 ping statistics --- 00:11:22.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.457 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:22.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:22.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:11:22.457 00:11:22.457 --- 10.0.0.1 ping statistics --- 00:11:22.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.457 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:22.457 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3707513 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3707513 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3707513 ']' 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.458 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.458 [2024-07-24 08:57:00.457805] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
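
Condensing the nvmf_tcp_init steps above: the target port is moved into the cvl_0_0_ns_spdk namespace with address 10.0.0.2/24 while the initiator port stays in the root namespace at 10.0.0.1/24, an iptables rule admits port 4420, and both directions are ping-verified before nvmf_tgt starts inside the namespace. Reassembled from the commands in the trace (same interface names assumed):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean addresses
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Running the target inside the namespace means the initiator can only reach 10.0.0.2:4420 through the NIC pair, so the test exercises a real TCP data path rather than host loopback.
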
00:11:22.458 [2024-07-24 08:57:00.457880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.458 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.458 [2024-07-24 08:57:00.495129] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:22.458 [2024-07-24 08:57:00.523622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:22.716 [2024-07-24 08:57:00.612596] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:22.716 [2024-07-24 08:57:00.612664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:22.716 [2024-07-24 08:57:00.612692] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:22.716 [2024-07-24 08:57:00.612703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:22.716 [2024-07-24 08:57:00.612713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:22.716 [2024-07-24 08:57:00.612794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:22.716 [2024-07-24 08:57:00.612859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:22.716 [2024-07-24 08:57:00.612909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:22.716 [2024-07-24 08:57:00.612911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 [2024-07-24 08:57:00.773659] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:22.716 08:57:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 Null1 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 [2024-07-24 08:57:00.813967] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.716 Null2 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.716 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 Null3 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:22.974 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 
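
Each pass of the seq 1 4 loop follows the same pattern: create a null bdev NullN (arguments 102400 and 512, the size and block size passed to bdev_null_create), create subsystem nqn.2016-06.io.spdk:cnodeN with any-host access (-a) and serial SPDK0000000000000N, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420; the discovery listener and referral added just below complete the picture. Spelled out as direct rpc.py calls (a sketch; the rpc_cmd wrapper in the trace talks to the same /var/tmp/spdk.sock):

  RPC=./scripts/rpc.py                          # path assumed relative to the spdk tree
  $RPC nvmf_create_transport -t tcp -o -u 8192  # once, before the loop (discovery.sh@23)
  for i in 1 2 3 4; do
    $RPC bdev_null_create "Null$i" 102400 512
    $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420  # just below, discovery.sh@32
  $RPC nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430            # just below, discovery.sh@35
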
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 Null4 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.975 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:11:22.975 00:11:22.975 Discovery Log Number of Records 6, Generation counter 6 00:11:22.975 =====Discovery Log Entry 0====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: current discovery subsystem 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4420 00:11:22.975 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: explicit discovery connections, duplicate discovery information 00:11:22.975 sectype: none 00:11:22.975 =====Discovery Log Entry 1====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: nvme subsystem 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4420 00:11:22.975 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: none 00:11:22.975 sectype: none 00:11:22.975 =====Discovery Log Entry 2====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: nvme subsystem 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4420 00:11:22.975 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: none 00:11:22.975 sectype: none 00:11:22.975 =====Discovery Log Entry 3====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: nvme subsystem 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4420 00:11:22.975 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: none 00:11:22.975 sectype: none 00:11:22.975 =====Discovery Log Entry 4====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: nvme subsystem 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4420 00:11:22.975 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: none 00:11:22.975 sectype: none 00:11:22.975 =====Discovery Log Entry 5====== 00:11:22.975 trtype: tcp 00:11:22.975 adrfam: ipv4 00:11:22.975 subtype: discovery subsystem referral 00:11:22.975 treq: not required 00:11:22.975 portid: 0 00:11:22.975 trsvcid: 4430 00:11:22.975 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:22.975 traddr: 10.0.0.2 00:11:22.975 eflags: none 00:11:22.975 sectype: none 00:11:22.975 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:22.975 Perform nvmf subsystem discovery via RPC 00:11:22.975 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:22.975 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.975 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.975 [ 00:11:22.975 { 00:11:22.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:22.975 "subtype": "Discovery", 00:11:22.975 "listen_addresses": [ 00:11:22.975 { 00:11:22.975 "trtype": "TCP", 00:11:22.975 "adrfam": "IPv4", 00:11:22.975 "traddr": "10.0.0.2", 00:11:22.975 "trsvcid": "4420" 00:11:22.975 } 00:11:22.975 ], 00:11:22.975 "allow_any_host": true, 00:11:22.975 "hosts": [] 00:11:22.975 }, 00:11:22.975 { 00:11:22.975 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:22.975 "subtype": "NVMe", 00:11:22.975 "listen_addresses": [ 00:11:22.975 { 00:11:22.975 "trtype": "TCP", 00:11:22.975 "adrfam": "IPv4", 00:11:22.975 "traddr": "10.0.0.2", 00:11:22.975 "trsvcid": "4420" 00:11:22.975 } 00:11:22.975 ], 00:11:22.975 "allow_any_host": true, 00:11:22.975 "hosts": [], 00:11:22.975 "serial_number": "SPDK00000000000001", 00:11:22.975 "model_number": "SPDK bdev Controller", 00:11:22.975 "max_namespaces": 32, 00:11:22.975 "min_cntlid": 1, 00:11:22.975 "max_cntlid": 65519, 00:11:22.975 "namespaces": [ 00:11:22.975 { 00:11:22.975 
"nsid": 1, 00:11:22.975 "bdev_name": "Null1", 00:11:22.975 "name": "Null1", 00:11:22.975 "nguid": "FB9DE424B7C0442D8C7428A2A2C80FFC", 00:11:22.975 "uuid": "fb9de424-b7c0-442d-8c74-28a2a2c80ffc" 00:11:22.975 } 00:11:22.975 ] 00:11:22.975 }, 00:11:22.975 { 00:11:22.975 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:22.975 "subtype": "NVMe", 00:11:22.975 "listen_addresses": [ 00:11:22.975 { 00:11:22.975 "trtype": "TCP", 00:11:22.975 "adrfam": "IPv4", 00:11:22.975 "traddr": "10.0.0.2", 00:11:22.975 "trsvcid": "4420" 00:11:22.975 } 00:11:22.975 ], 00:11:22.975 "allow_any_host": true, 00:11:22.975 "hosts": [], 00:11:22.975 "serial_number": "SPDK00000000000002", 00:11:22.975 "model_number": "SPDK bdev Controller", 00:11:22.975 "max_namespaces": 32, 00:11:22.975 "min_cntlid": 1, 00:11:22.975 "max_cntlid": 65519, 00:11:22.975 "namespaces": [ 00:11:22.975 { 00:11:22.975 "nsid": 1, 00:11:22.975 "bdev_name": "Null2", 00:11:22.975 "name": "Null2", 00:11:22.975 "nguid": "B87B90C209F54362AAC13004F19578CE", 00:11:22.975 "uuid": "b87b90c2-09f5-4362-aac1-3004f19578ce" 00:11:22.975 } 00:11:22.975 ] 00:11:22.975 }, 00:11:22.975 { 00:11:22.975 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:22.975 "subtype": "NVMe", 00:11:22.975 "listen_addresses": [ 00:11:22.975 { 00:11:22.975 "trtype": "TCP", 00:11:22.975 "adrfam": "IPv4", 00:11:22.975 "traddr": "10.0.0.2", 00:11:22.975 "trsvcid": "4420" 00:11:22.975 } 00:11:22.975 ], 00:11:22.975 "allow_any_host": true, 00:11:22.975 "hosts": [], 00:11:22.975 "serial_number": "SPDK00000000000003", 00:11:22.975 "model_number": "SPDK bdev Controller", 00:11:22.975 "max_namespaces": 32, 00:11:22.975 "min_cntlid": 1, 00:11:22.975 "max_cntlid": 65519, 00:11:22.975 "namespaces": [ 00:11:22.975 { 00:11:22.975 "nsid": 1, 00:11:22.975 "bdev_name": "Null3", 00:11:22.975 "name": "Null3", 00:11:22.975 "nguid": "E6F71C06DB254C84A27EDD07FFE92FF6", 00:11:22.975 "uuid": "e6f71c06-db25-4c84-a27e-dd07ffe92ff6" 00:11:22.975 } 00:11:22.975 ] 00:11:22.975 }, 00:11:22.975 { 00:11:22.975 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:22.975 "subtype": "NVMe", 00:11:22.975 "listen_addresses": [ 00:11:22.975 { 00:11:22.975 "trtype": "TCP", 00:11:22.975 "adrfam": "IPv4", 00:11:22.975 "traddr": "10.0.0.2", 00:11:22.975 "trsvcid": "4420" 00:11:22.975 } 00:11:22.975 ], 00:11:22.975 "allow_any_host": true, 00:11:22.975 "hosts": [], 00:11:22.975 "serial_number": "SPDK00000000000004", 00:11:22.975 "model_number": "SPDK bdev Controller", 00:11:22.975 "max_namespaces": 32, 00:11:22.975 "min_cntlid": 1, 00:11:22.975 "max_cntlid": 65519, 00:11:22.976 "namespaces": [ 00:11:22.976 { 00:11:22.976 "nsid": 1, 00:11:22.976 "bdev_name": "Null4", 00:11:22.976 "name": "Null4", 00:11:22.976 "nguid": "4C939C6E3FF1445D89DD1763CEA09926", 00:11:22.976 "uuid": "4c939c6e-3ff1-445d-89dd-1763cea09926" 00:11:22.976 } 00:11:22.976 ] 00:11:22.976 } 00:11:22.976 ] 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.976 08:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.976 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.234 rmmod nvme_tcp 00:11:23.234 rmmod nvme_fabrics 00:11:23.234 rmmod nvme_keyring 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:23.234 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3707513 ']' 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3707513 00:11:23.235 
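
Teardown mirrors setup in reverse: each subsystem is deleted before its backing bdev, the 4430 referral is removed, and bdev_get_bdevs piped through jq -r '.[].name' has to come back empty (check_bdevs=) before the test is considered clean. The same sequence as direct calls (a sketch, same rpc.py assumption as above):

  RPC=./scripts/rpc.py
  for i in 1 2 3 4; do
    $RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    $RPC bdev_null_delete "Null$i"
  done
  $RPC nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  [ -z "$($RPC bdev_get_bdevs | jq -r '.[].name')" ] && echo 'all bdevs gone'
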
08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3707513 ']' 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3707513 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3707513 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3707513' 00:11:23.235 killing process with pid 3707513 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3707513 00:11:23.235 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3707513 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.495 08:57:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.028 00:11:26.028 real 0m5.406s 00:11:26.028 user 0m4.321s 00:11:26.028 sys 0m1.827s 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 ************************************ 00:11:26.028 END TEST nvmf_target_discovery 00:11:26.028 ************************************ 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.028 ************************************ 00:11:26.028 START TEST nvmf_referrals 00:11:26.028 
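
nvmftestfini then unwinds the host side: sync, unload the NVMe/TCP kernel modules (the rmmod lines above), kill the target by the pid nvmfappstart recorded, and flush the namespace plumbing. In outline (simplified; the real helpers live in nvmf/common.sh and autotest_common.sh):

  sync
  modprobe -v -r nvme-tcp       # drops nvme_tcp / nvme_fabrics / nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill 3707513 && wait 3707513  # killprocess: pid from nvmfappstart, verified to be reactor_0
  ip -4 addr flush cvl_0_1      # _remove_spdk_ns, invoked above, handles cvl_0_0_ns_spdk itself
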
************************************ 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:26.028 * Looking for test storage... 00:11:26.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.028 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.029 08:57:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.029 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
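
referrals.sh layers three loopback referral addresses (127.0.0.2, 127.0.0.3, 127.0.0.4, all on referral port 4430) on top of the usual discovery NQN and cnode1 subsystem, then adds and removes them against the target being brought up below. Registering them would follow the pattern already seen in the discovery test (a sketch; the harness's own checks are more involved):

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals  # list what the discovery service will hand out
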
00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:27.933 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.933 
08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:27.933 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.933 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:27.934 Found net devices under 0000:09:00.0: cvl_0_0 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:27.934 Found net devices under 0000:09:00.1: cvl_0_1 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.934 08:57:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:27.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:11:27.934 00:11:27.934 --- 10.0.0.2 ping statistics --- 00:11:27.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.934 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:27.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:11:27.934 00:11:27.934 --- 10.0.0.1 ping statistics --- 00:11:27.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.934 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3709679 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3709679 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3709679 ']' 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:27.934 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:27.934 [2024-07-24 08:57:05.922436] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:11:27.934 [2024-07-24 08:57:05.922528] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.934 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.934 [2024-07-24 08:57:05.961752] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:27.934 [2024-07-24 08:57:05.994082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:28.193 [2024-07-24 08:57:06.088637] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:28.193 [2024-07-24 08:57:06.088699] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.193 [2024-07-24 08:57:06.088716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.193 [2024-07-24 08:57:06.088729] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.193 [2024-07-24 08:57:06.088741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.193 [2024-07-24 08:57:06.088821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.193 [2024-07-24 08:57:06.088875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.193 [2024-07-24 08:57:06.088926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.193 [2024-07-24 08:57:06.088929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 [2024-07-24 08:57:06.256835] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
10.0.0.2 -s 8009 discovery 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 [2024-07-24 08:57:06.269054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.193 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:28.451 08:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:28.451 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.708 08:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.708 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:28.709 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:28.966 08:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:28.966 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:29.224 08:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.224 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:29.480 08:57:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.480 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.737 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:29.738 rmmod nvme_tcp 00:11:29.738 rmmod nvme_fabrics 00:11:29.738 rmmod nvme_keyring 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3709679 ']' 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3709679 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3709679 ']' 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3709679 00:11:29.738 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3709679 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3709679' 00:11:29.997 killing process with pid 3709679 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3709679 00:11:29.997 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3709679 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.997 08:57:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.534 08:57:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:32.534 00:11:32.534 real 0m6.562s 00:11:32.534 user 0m9.393s 00:11:32.534 sys 0m2.141s 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 ************************************ 00:11:32.534 END TEST nvmf_referrals 00:11:32.534 ************************************ 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:32.534 ************************************ 00:11:32.534 START TEST nvmf_connect_disconnect 00:11:32.534 ************************************ 00:11:32.534 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:32.534 * Looking for test storage... 00:11:32.535 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:32.535 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:34.481 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.481 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:34.481 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.482 08:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:34.482 Found net devices under 0000:09:00.0: cvl_0_0 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:34.482 Found net devices under 0000:09:00.1: cvl_0_1 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:11:34.482 00:11:34.482 --- 10.0.0.2 ping statistics --- 00:11:34.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.482 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:34.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:11:34.482 00:11:34.482 --- 10.0.0.1 ping statistics --- 00:11:34.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.482 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3712394 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3712394 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3712394 ']' 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:34.482 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.482 [2024-07-24 08:57:12.537343] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:11:34.482 [2024-07-24 08:57:12.537423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.482 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.740 [2024-07-24 08:57:12.577422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:34.740 [2024-07-24 08:57:12.606115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.741 [2024-07-24 08:57:12.692693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.741 [2024-07-24 08:57:12.692743] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.741 [2024-07-24 08:57:12.692771] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.741 [2024-07-24 08:57:12.692782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.741 [2024-07-24 08:57:12.692791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.741 [2024-07-24 08:57:12.692937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.741 [2024-07-24 08:57:12.693004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.741 [2024-07-24 08:57:12.693053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.741 [2024-07-24 08:57:12.693056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.741 [2024-07-24 08:57:12.828246] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.741 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 
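For reference, the sequence being traced here reduces to a short script once the rpc_cmd/xtrace wrappers are stripped away. A minimal sketch, assuming SPDK's scripts/rpc.py and nvme-cli are on PATH, the namespace fixture above is in place, and nvmf_tgt is already running; all flag values are copied from the trace itself:

# target side: TCP transport, a 64 MB malloc bdev with 512-byte blocks
# (auto-named Malloc0), a subsystem, its namespace, and a listener on 10.0.0.2:4420
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: the connect/disconnect stress loop
# (num_iterations=100 and NVME_CONNECT='nvme connect -i 8' per the trace)
for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # logs "... disconnected 1 controller(s)"
done

Each "NQN:... disconnected 1 controller(s)" line in the log below is the nvme-cli output of one such cycle.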
00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:34.998 [2024-07-24 08:57:12.879536] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:11:34.998 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:37.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.416 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.312 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.897 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.214 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
[remaining iterations of the 100-cycle connect/disconnect loop elided: from 00:12:09.739 through 00:15:23.561 every cycle logged the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line]
00:15:26.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:15:26.095 09:01:03 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:26.095 rmmod nvme_tcp 00:15:26.095 rmmod nvme_fabrics 00:15:26.095 rmmod nvme_keyring 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3712394 ']' 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3712394 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3712394 ']' 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3712394 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3712394 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3712394' 00:15:26.095 killing process with pid 3712394 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3712394 00:15:26.095 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3712394 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:26.356 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.265 00:15:28.265 real 3m56.102s 00:15:28.265 user 14m58.467s 00:15:28.265 sys 0m35.020s 00:15:28.265 09:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:15:28.265 ************************************ 00:15:28.265 END TEST nvmf_connect_disconnect 00:15:28.265 ************************************ 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:28.265 ************************************ 00:15:28.265 START TEST nvmf_multitarget 00:15:28.265 ************************************ 00:15:28.265 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:15:28.523 * Looking for test storage... 00:15:28.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@47 -- # : 0 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.523 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # 
net_devs=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:30.424 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:30.425 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:30.425 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:30.425 Found net devices under 0000:09:00.0: cvl_0_0 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:30.425 09:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:30.425 Found net devices under 0000:09:00.1: cvl_0_1 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:30.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:30.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:15:30.425 00:15:30.425 --- 10.0.0.2 ping statistics --- 00:15:30.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.425 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:30.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:30.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:15:30.425 00:15:30.425 --- 10.0.0.1 ping statistics --- 00:15:30.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:30.425 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3743430 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3743430 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3743430 ']' 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.425 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:30.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
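The fixture the trace keeps rebuilding between tests is worth reading once in isolation: one E810 port (cvl_0_0) is moved into a private network namespace to play the target, its sibling port (cvl_0_1) stays in the root namespace as the initiator, an iptables rule accepts NVMe/TCP traffic arriving on the initiator interface, and both directions are verified with a single ping. Condensed from the commands above (interface names and addresses are specific to this rig):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

nvmf_tgt itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, as traced just above), so the target's listener binds on the namespaced side of the split.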
00:15:30.426 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.426 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.426 [2024-07-24 09:01:08.471336] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:15:30.426 [2024-07-24 09:01:08.471412] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:30.426 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.426 [2024-07-24 09:01:08.508372] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:30.684 [2024-07-24 09:01:08.540753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:30.684 [2024-07-24 09:01:08.634143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:30.684 [2024-07-24 09:01:08.634203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:30.684 [2024-07-24 09:01:08.634220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:30.684 [2024-07-24 09:01:08.634234] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:30.684 [2024-07-24 09:01:08.634245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:30.684 [2024-07-24 09:01:08.634326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.684 [2024-07-24 09:01:08.634382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:30.684 [2024-07-24 09:01:08.634499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:30.684 [2024-07-24 09:01:08.634501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:30.684 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:15:30.942 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:30.942 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:30.942 "nvmf_tgt_1" 00:15:30.942 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:31.199 "nvmf_tgt_2" 00:15:31.199 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.199 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:15:31.199 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:31.199 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:31.457 true 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:31.457 true 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:31.457 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:31.457 rmmod nvme_tcp 00:15:31.457 rmmod nvme_fabrics 00:15:31.715 rmmod nvme_keyring 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3743430 ']' 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3743430 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3743430 ']' 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3743430 00:15:31.715 09:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3743430 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3743430' 00:15:31.715 killing process with pid 3743430 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3743430 00:15:31.715 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3743430 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:31.975 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:33.877 00:15:33.877 real 0m5.532s 00:15:33.877 user 0m6.166s 00:15:33.877 sys 0m1.790s 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 ************************************ 00:15:33.877 END TEST nvmf_multitarget 00:15:33.877 ************************************ 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:33.877 ************************************ 00:15:33.877 START TEST nvmf_rpc 00:15:33.877 ************************************ 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:15:33.877 * Looking for test storage... 
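Recapping the multitarget pass that just finished before the rpc test proceeds: minus the wrappers, the whole check is a handful of calls against the running nvmf_tgt. A sketch, assuming multitarget_rpc.py as invoked in the trace and jq to count the JSON replies; the expected counts in the comments are the ones the test asserted:

multitarget_rpc.py nvmf_get_targets | jq length             # 1: just the default target
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32   # prints "nvmf_tgt_1"
multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32   # prints "nvmf_tgt_2"
multitarget_rpc.py nvmf_get_targets | jq length             # now 3
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1         # prints: true
multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2         # prints: true
multitarget_rpc.py nvmf_get_targets | jq length             # back to 1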
00:15:33.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:33.877 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.136 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:34.136 09:01:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:15:34.136 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.073 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:36.073 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:36.074 09:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:36.074 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:36.074 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.074 
09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:36.074 Found net devices under 0000:09:00.0: cvl_0_0 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:36.074 Found net devices under 0000:09:00.1: cvl_0_1 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:36.074 09:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:36.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:15:36.074 00:15:36.074 --- 10.0.0.2 ping statistics --- 00:15:36.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.074 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:36.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:36.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:15:36.074 00:15:36.074 --- 10.0.0.1 ping statistics --- 00:15:36.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.074 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:36.074 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3745522 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.075 09:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3745522 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3745522 ']' 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.075 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:36.332 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.332 [2024-07-24 09:01:14.230769] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:15:36.332 [2024-07-24 09:01:14.230846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.332 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.332 [2024-07-24 09:01:14.267146] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:36.332 [2024-07-24 09:01:14.294049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.332 [2024-07-24 09:01:14.383110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.332 [2024-07-24 09:01:14.383156] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.332 [2024-07-24 09:01:14.383171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.332 [2024-07-24 09:01:14.383184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.332 [2024-07-24 09:01:14.383201] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
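At this point the harness has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten is polling until the app answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, using the binary path and flags from the trace above; the socket poll and retry cap below are illustrative stand-ins, not the real waitforlisten implementation:

    # Start the NVMe-oF target inside the test namespace, backgrounded,
    # mirroring the "ip netns exec ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF" above.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll until the RPC domain socket appears (the real waitforlisten also
    # issues an RPC to confirm the app is responsive; this is a simplification).
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/spdk.sock ]] && break
        sleep 0.1
    done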
00:15:36.332 [2024-07-24 09:01:14.383260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.332 [2024-07-24 09:01:14.383296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.332 [2024-07-24 09:01:14.383740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.332 [2024-07-24 09:01:14.383745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.589 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:15:36.589 "tick_rate": 2700000000, 00:15:36.589 "poll_groups": [ 00:15:36.589 { 00:15:36.589 "name": "nvmf_tgt_poll_group_000", 00:15:36.589 "admin_qpairs": 0, 00:15:36.589 "io_qpairs": 0, 00:15:36.589 "current_admin_qpairs": 0, 00:15:36.589 "current_io_qpairs": 0, 00:15:36.589 "pending_bdev_io": 0, 00:15:36.589 "completed_nvme_io": 0, 00:15:36.589 "transports": [] 00:15:36.589 }, 00:15:36.589 { 00:15:36.589 "name": "nvmf_tgt_poll_group_001", 00:15:36.589 "admin_qpairs": 0, 00:15:36.589 "io_qpairs": 0, 00:15:36.589 "current_admin_qpairs": 0, 00:15:36.589 "current_io_qpairs": 0, 00:15:36.589 "pending_bdev_io": 0, 00:15:36.589 "completed_nvme_io": 0, 00:15:36.589 "transports": [] 00:15:36.589 }, 00:15:36.589 { 00:15:36.589 "name": "nvmf_tgt_poll_group_002", 00:15:36.589 "admin_qpairs": 0, 00:15:36.589 "io_qpairs": 0, 00:15:36.589 "current_admin_qpairs": 0, 00:15:36.589 "current_io_qpairs": 0, 00:15:36.589 "pending_bdev_io": 0, 00:15:36.589 "completed_nvme_io": 0, 00:15:36.589 "transports": [] 00:15:36.589 }, 00:15:36.589 { 00:15:36.589 "name": "nvmf_tgt_poll_group_003", 00:15:36.589 "admin_qpairs": 0, 00:15:36.589 "io_qpairs": 0, 00:15:36.589 "current_admin_qpairs": 0, 00:15:36.589 "current_io_qpairs": 0, 00:15:36.589 "pending_bdev_io": 0, 00:15:36.590 "completed_nvme_io": 0, 00:15:36.590 "transports": [] 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
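The (( 4 == 4 )) check just above is the jcount helper verifying that nvmf_get_stats reports one poll group per reactor core selected by -m 0xF. As the trace shows, it reduces to piping the stats JSON through jq and counting output lines; the same assertion as a standalone sketch, assuming scripts/rpc.py is on hand:

    # One poll group is created per core in the mask, so -m 0xF => 4 groups.
    stats=$(./scripts/rpc.py nvmf_get_stats)
    groups=$(echo "$stats" | jq '.poll_groups[].name' | wc -l)
    (( groups == 4 )) || echo "expected 4 poll groups, got $groups" >&2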
00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.590 [2024-07-24 09:01:14.630960] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:15:36.590 "tick_rate": 2700000000, 00:15:36.590 "poll_groups": [ 00:15:36.590 { 00:15:36.590 "name": "nvmf_tgt_poll_group_000", 00:15:36.590 "admin_qpairs": 0, 00:15:36.590 "io_qpairs": 0, 00:15:36.590 "current_admin_qpairs": 0, 00:15:36.590 "current_io_qpairs": 0, 00:15:36.590 "pending_bdev_io": 0, 00:15:36.590 "completed_nvme_io": 0, 00:15:36.590 "transports": [ 00:15:36.590 { 00:15:36.590 "trtype": "TCP" 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }, 00:15:36.590 { 00:15:36.590 "name": "nvmf_tgt_poll_group_001", 00:15:36.590 "admin_qpairs": 0, 00:15:36.590 "io_qpairs": 0, 00:15:36.590 "current_admin_qpairs": 0, 00:15:36.590 "current_io_qpairs": 0, 00:15:36.590 "pending_bdev_io": 0, 00:15:36.590 "completed_nvme_io": 0, 00:15:36.590 "transports": [ 00:15:36.590 { 00:15:36.590 "trtype": "TCP" 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }, 00:15:36.590 { 00:15:36.590 "name": "nvmf_tgt_poll_group_002", 00:15:36.590 "admin_qpairs": 0, 00:15:36.590 "io_qpairs": 0, 00:15:36.590 "current_admin_qpairs": 0, 00:15:36.590 "current_io_qpairs": 0, 00:15:36.590 "pending_bdev_io": 0, 00:15:36.590 "completed_nvme_io": 0, 00:15:36.590 "transports": [ 00:15:36.590 { 00:15:36.590 "trtype": "TCP" 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }, 00:15:36.590 { 00:15:36.590 "name": "nvmf_tgt_poll_group_003", 00:15:36.590 "admin_qpairs": 0, 00:15:36.590 "io_qpairs": 0, 00:15:36.590 "current_admin_qpairs": 0, 00:15:36.590 "current_io_qpairs": 0, 00:15:36.590 "pending_bdev_io": 0, 00:15:36.590 "completed_nvme_io": 0, 00:15:36.590 "transports": [ 00:15:36.590 { 00:15:36.590 "trtype": "TCP" 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 } 00:15:36.590 ] 00:15:36.590 }' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:36.590 09:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:36.590 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 Malloc1 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 [2024-07-24 09:01:14.784726] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:15:36.848 [2024-07-24 09:01:14.807228] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:15:36.848 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:36.848 could not add new controller: failed to write to nvme-fabrics device 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.848 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:37.415 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:37.415 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:37.415 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:37.415 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:37.415 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:39.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.950 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:39.951 [2024-07-24 09:01:17.639222] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:15:39.951 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:39.951 could not add new controller: failed to write to nvme-fabrics device 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.951 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:40.519 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:40.519 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:40.519 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:40.519 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:40.519 09:01:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
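The rejected connect above, followed by nvmf_subsystem_allow_any_host -e and a successful reconnect, demonstrates the target's host access control: with allow_any_host disabled and the host NQN removed from the subsystem's host list, nvmf_qpair_access_allowed refuses the connection and the initiator surfaces it as an Input/output error on /dev/nvme-fabrics. The two ways to let the host in, sketched with the same RPCs this run uses:

    # Option 1: whitelist this host's NQN on the subsystem.
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    # Option 2 (what rpc.sh@72 does above): re-enable allow_any_host.
    ./scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
    # Either way, the same connect now succeeds.
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420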
00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:42.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.424 [2024-07-24 09:01:20.458317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.424 
09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.424 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:42.994 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:42.994 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:42.994 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:42.994 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:42.994 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:45.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
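The first loop pass is now complete: the connect was verified by waitforserial and the disconnect by waitforserial_disconnect. What follows (rpc.sh@93/@94 and the next rpc.sh@81 iteration) is the teardown and rebuild, repeated for loops=5. One iteration, condensed from the trace into a sketch; the waitforserial polling between connect and disconnect is elided:

    for i in $(seq 1 5); do
        ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # ... wait for the SPDKISFASTANDAWESOME serial to appear in lsblk ...
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done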
00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 [2024-07-24 09:01:23.189574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.532 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:45.792 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:45.792 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:15:45.792 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.792 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:45.792 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:47.696 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:47.696 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:47.696 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:47.954 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 [2024-07-24 09:01:25.915130] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.955 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:48.523 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:48.523 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:48.523 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.523 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:48.523 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:50.425 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:50.425 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:50.425 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:50.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.685 09:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 [2024-07-24 09:01:28.681960] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.685 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:51.251 09:01:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:51.251 09:01:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:51.251 09:01:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:51.251 09:01:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:51.251 09:01:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:53.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 [2024-07-24 09:01:31.440352] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.789 09:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:54.048 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:54.048 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:15:54.048 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:15:54.048 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:15:54.048 09:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:15:56.584 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:56.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 [2024-07-24 09:01:34.257051] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 [2024-07-24 09:01:34.305158] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 [2024-07-24 09:01:34.353318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.584 [2024-07-24 09:01:34.401492] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.584 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 [2024-07-24 09:01:34.449671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 
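The second loop (target/rpc.sh@99 through @107, driven by seq 1 5) exercises the same RPC surface without ever connecting a host, and omits -n on nvmf_subsystem_add_ns so the target assigns the first free namespace ID, 1, which is then removed explicitly. Each of the five iterations reduces to roughly:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem $nqn -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
      $rpc nvmf_subsystem_add_ns $nqn Malloc1      # no -n: target picks nsid 1
      $rpc nvmf_subsystem_allow_any_host $nqn
      $rpc nvmf_subsystem_remove_ns $nqn 1
      $rpc nvmf_delete_subsystem $nqn
  done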
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:56.585 "tick_rate": 2700000000, 00:15:56.585 "poll_groups": [ 00:15:56.585 { 00:15:56.585 "name": "nvmf_tgt_poll_group_000", 00:15:56.585 "admin_qpairs": 2, 00:15:56.585 "io_qpairs": 84, 00:15:56.585 "current_admin_qpairs": 0, 00:15:56.585 "current_io_qpairs": 0, 00:15:56.585 "pending_bdev_io": 0, 00:15:56.585 "completed_nvme_io": 167, 00:15:56.585 "transports": [ 00:15:56.585 { 00:15:56.585 "trtype": "TCP" 00:15:56.585 } 00:15:56.585 ] 00:15:56.585 }, 00:15:56.585 { 00:15:56.585 "name": "nvmf_tgt_poll_group_001", 00:15:56.585 "admin_qpairs": 2, 00:15:56.585 "io_qpairs": 84, 00:15:56.585 "current_admin_qpairs": 0, 00:15:56.585 "current_io_qpairs": 0, 00:15:56.585 "pending_bdev_io": 0, 00:15:56.585 "completed_nvme_io": 121, 00:15:56.585 "transports": [ 00:15:56.585 { 00:15:56.585 "trtype": "TCP" 00:15:56.585 } 00:15:56.585 ] 00:15:56.585 }, 00:15:56.585 { 00:15:56.585 "name": "nvmf_tgt_poll_group_002", 00:15:56.585 "admin_qpairs": 1, 00:15:56.585 "io_qpairs": 84, 00:15:56.585 "current_admin_qpairs": 0, 00:15:56.585 "current_io_qpairs": 0, 00:15:56.585 "pending_bdev_io": 0, 00:15:56.585 "completed_nvme_io": 199, 00:15:56.585 "transports": [ 00:15:56.585 { 00:15:56.585 "trtype": "TCP" 00:15:56.585 } 00:15:56.585 ] 00:15:56.585 }, 00:15:56.585 { 00:15:56.585 "name": "nvmf_tgt_poll_group_003", 00:15:56.585 "admin_qpairs": 2, 00:15:56.585 "io_qpairs": 84, 00:15:56.585 "current_admin_qpairs": 0, 00:15:56.585 "current_io_qpairs": 0, 00:15:56.585 "pending_bdev_io": 0, 00:15:56.585 "completed_nvme_io": 199, 00:15:56.585 "transports": [ 00:15:56.585 { 00:15:56.585 "trtype": "TCP" 00:15:56.585 } 00:15:56.585 ] 00:15:56.585 } 00:15:56.585 ] 00:15:56.585 }' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:56.585 rmmod nvme_tcp 00:15:56.585 rmmod nvme_fabrics 00:15:56.585 rmmod nvme_keyring 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3745522 ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3745522 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3745522 ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3745522 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3745522 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3745522' 00:15:56.585 killing process with pid 3745522 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3745522 00:15:56.585 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3745522 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
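The jsum helper seen in the trace (target/rpc.sh@19 and @20) does the totalling for those final assertions: it applies a jq filter to the nvmf_get_stats output and sums the resulting column with awk. A minimal reconstruction, assuming the same stats layout as the JSON above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  jsum() {
      local filter=$1
      $rpc nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2+2+1+2 = 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84 = 336 in this run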
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.844 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.388 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:59.388 00:15:59.388 real 0m25.056s 00:15:59.388 user 1m21.339s 00:15:59.388 sys 0m4.040s 00:15:59.388 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:59.389 09:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:59.389 ************************************ 00:15:59.389 END TEST nvmf_rpc 00:15:59.389 ************************************ 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:59.389 ************************************ 00:15:59.389 START TEST nvmf_invalid 00:15:59.389 ************************************ 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:59.389 * Looking for test storage... 00:15:59.389 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:59.389 09:01:37 
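The END TEST / START TEST banners and the real/user/sys timing come from the run_test wrapper in autotest_common.sh, which times each suite script and frames its output. In outline (a simplification; the actual helper also validates its arguments, as the '[' 3 -le 1 ']' check above shows, and propagates the exit code):

  run_test() {
      local test_name=$1; shift
      echo "START TEST $test_name"
      time "$@"                      # e.g. .../test/nvmf/target/invalid.sh --transport=tcp
      echo "END TEST $test_name"
  }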
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:59.389 09:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:15:59.389 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:01.303 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:01.303 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:01.303 Found net devices under 0000:09:00.0: cvl_0_0 00:16:01.303 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.304 09:01:39 
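NIC discovery here is ID-driven: common.sh matches each PCI function against a table of supported parts (both ports of this E810-family adapter report 0x8086:0x159b) and then resolves the function to its kernel netdev through sysfs. The core of that lookup, sketched with pciutils (an approximation: the real script consults a prebuilt pci_bus_cache rather than calling lspci):

  for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
      done
  done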
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:01.304 Found net devices under 0000:09:00.1: cvl_0_1 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:16:01.304 00:16:01.304 --- 10.0.0.2 ping statistics --- 00:16:01.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.304 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:16:01.304 00:16:01.304 --- 10.0.0.1 ping statistics --- 00:16:01.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.304 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3750020 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3750020 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3750020 ']' 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.304 09:01:39 
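nvmf_tcp_init builds a two-host topology on a single machine by moving one physical port (cvl_0_0, the target side) into a private network namespace while its sibling (cvl_0_1) stays in the root namespace as the initiator. The setup traced above boils down to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two one-packet pings are the smoke test: both directions must answer before the target application is started.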
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.304 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 [2024-07-24 09:01:39.266419] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:16:01.304 [2024-07-24 09:01:39.266507] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.304 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.304 [2024-07-24 09:01:39.303543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:01.304 [2024-07-24 09:01:39.336136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.562 [2024-07-24 09:01:39.432690] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.562 [2024-07-24 09:01:39.432750] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.562 [2024-07-24 09:01:39.432766] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.562 [2024-07-24 09:01:39.432779] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.562 [2024-07-24 09:01:39.432791] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
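nvmfappstart then launches nvmf_tgt inside that namespace with core mask 0xF, which is why four reactor threads report in just below, and waitforlisten blocks until the application answers on its RPC socket. Roughly (the polling loop is an approximation of the real helper):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until $rpc -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done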
00:16:01.562 [2024-07-24 09:01:39.432879] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.562 [2024-07-24 09:01:39.432934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.562 [2024-07-24 09:01:39.432987] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.562 [2024-07-24 09:01:39.432990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:01.562 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode10828 00:16:01.820 [2024-07-24 09:01:39.874684] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:16:01.820 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:16:01.820 { 00:16:01.820 "nqn": "nqn.2016-06.io.spdk:cnode10828", 00:16:01.820 "tgt_name": "foobar", 00:16:01.820 "method": "nvmf_create_subsystem", 00:16:01.820 "req_id": 1 00:16:01.820 } 00:16:01.820 Got JSON-RPC error response 00:16:01.820 response: 00:16:01.820 { 00:16:01.820 "code": -32603, 00:16:01.820 "message": "Unable to find target foobar" 00:16:01.820 }' 00:16:01.820 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:16:01.820 { 00:16:01.820 "nqn": "nqn.2016-06.io.spdk:cnode10828", 00:16:01.820 "tgt_name": "foobar", 00:16:01.820 "method": "nvmf_create_subsystem", 00:16:01.820 "req_id": 1 00:16:01.820 } 00:16:01.820 Got JSON-RPC error response 00:16:01.820 response: 00:16:01.820 { 00:16:01.820 "code": -32603, 00:16:01.820 "message": "Unable to find target foobar" 00:16:01.820 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:16:01.820 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:16:01.820 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode14451 00:16:02.079 [2024-07-24 09:01:40.183800] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14451: invalid serial number 'SPDKISFASTANDAWESOME' 00:16:02.337 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:16:02.337 { 00:16:02.337 "nqn": "nqn.2016-06.io.spdk:cnode14451", 00:16:02.337 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:02.337 "method": "nvmf_create_subsystem", 00:16:02.337 "req_id": 1 00:16:02.337 } 00:16:02.337 Got JSON-RPC error 
response 00:16:02.337 response: 00:16:02.337 { 00:16:02.337 "code": -32602, 00:16:02.337 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:02.337 }' 00:16:02.337 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:16:02.337 { 00:16:02.337 "nqn": "nqn.2016-06.io.spdk:cnode14451", 00:16:02.337 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:16:02.337 "method": "nvmf_create_subsystem", 00:16:02.337 "req_id": 1 00:16:02.337 } 00:16:02.337 Got JSON-RPC error response 00:16:02.337 response: 00:16:02.337 { 00:16:02.337 "code": -32602, 00:16:02.337 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:16:02.337 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:02.337 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:16:02.337 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23552 00:16:02.337 [2024-07-24 09:01:40.436564] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23552: invalid model number 'SPDK_Controller' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:16:02.597 { 00:16:02.597 "nqn": "nqn.2016-06.io.spdk:cnode23552", 00:16:02.597 "model_number": "SPDK_Controller\u001f", 00:16:02.597 "method": "nvmf_create_subsystem", 00:16:02.597 "req_id": 1 00:16:02.597 } 00:16:02.597 Got JSON-RPC error response 00:16:02.597 response: 00:16:02.597 { 00:16:02.597 "code": -32602, 00:16:02.597 "message": "Invalid MN SPDK_Controller\u001f" 00:16:02.597 }' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:16:02.597 { 00:16:02.597 "nqn": "nqn.2016-06.io.spdk:cnode23552", 00:16:02.597 "model_number": "SPDK_Controller\u001f", 00:16:02.597 "method": "nvmf_create_subsystem", 00:16:02.597 "req_id": 1 00:16:02.597 } 00:16:02.597 Got JSON-RPC error response 00:16:02.597 response: 00:16:02.597 { 00:16:02.597 "code": -32602, 00:16:02.597 "message": "Invalid MN SPDK_Controller\u001f" 00:16:02.597 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 110 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.597 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=@ 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ n == \- ]] 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo nw@Px=Lx58px3H@sYuAa@ 00:16:02.598 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s nw@Px=Lx58px3H@sYuAa@ nqn.2016-06.io.spdk:cnode7439 00:16:02.857 [2024-07-24 09:01:40.745593] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7439: invalid serial number 'nw@Px=Lx58px3H@sYuAa@' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:16:02.857 { 00:16:02.857 "nqn": "nqn.2016-06.io.spdk:cnode7439", 00:16:02.857 "serial_number": "nw@Px=Lx58px3H@sYuAa@", 00:16:02.857 "method": "nvmf_create_subsystem", 00:16:02.857 "req_id": 1 00:16:02.857 } 00:16:02.857 Got JSON-RPC error response 00:16:02.857 response: 00:16:02.857 { 00:16:02.857 "code": -32602, 00:16:02.857 "message": "Invalid SN nw@Px=Lx58px3H@sYuAa@" 00:16:02.857 }' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:16:02.857 { 00:16:02.857 "nqn": "nqn.2016-06.io.spdk:cnode7439", 00:16:02.857 "serial_number": "nw@Px=Lx58px3H@sYuAa@", 00:16:02.857 "method": "nvmf_create_subsystem", 00:16:02.857 "req_id": 1 00:16:02.857 } 00:16:02.857 Got JSON-RPC error response 00:16:02.857 response: 00:16:02.857 { 00:16:02.857 "code": -32602, 00:16:02.857 "message": "Invalid SN nw@Px=Lx58px3H@sYuAa@" 00:16:02.857 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # 
(( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
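[Annotation] The per-character trace above and below is gen_random_s assembling the 41-character string for the invalid-model-number test (the 21-character run earlier fed the invalid-serial-number test). A minimal re-creation of the generator, matching the printf %x / echo -e pairs visible in the trace; the branch taken when the guard at invalid.sh@28 matches a leading '-' is not visible in this run, so the substitution below is an assumption:

  gen_random_s() {
    local length=$1 ll string=""
    local chars=($(seq 32 127))   # the ASCII codes listed in the chars=() trace
    for ((ll = 0; ll < length; ll++)); do
      # pick a random code, print it as hex, and append the decoded character
      string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # invalid.sh@28 compares the first character against '-'; replacing it is
    # an assumed handling so the result is never parsed as an rpc.py option
    [[ ${string::1} == "-" ]] && string="_${string:1}"
    echo "$string"
  }
  gen_random_s 21   # serial-number test string, e.g. nw@Px=Lx58px3H@sYuAa@
  gen_random_s 41   # model-number test string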
00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x78' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.857 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 126 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll 
< length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+='#' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ , == \- ]] 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ',F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#' 00:16:02.858 09:01:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ',F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#' nqn.2016-06.io.spdk:cnode6373 00:16:03.115 [2024-07-24 09:01:41.146909] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6373: invalid model number ',F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#' 00:16:03.115 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:16:03.115 { 00:16:03.115 "nqn": "nqn.2016-06.io.spdk:cnode6373", 00:16:03.115 "model_number": ",F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#", 00:16:03.115 "method": "nvmf_create_subsystem", 00:16:03.115 "req_id": 1 00:16:03.115 } 00:16:03.115 Got JSON-RPC error response 00:16:03.115 response: 00:16:03.115 { 00:16:03.115 "code": -32602, 00:16:03.115 "message": "Invalid MN ,F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#" 00:16:03.115 }' 00:16:03.115 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:16:03.115 { 00:16:03.115 "nqn": "nqn.2016-06.io.spdk:cnode6373", 00:16:03.115 "model_number": ",F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#", 00:16:03.115 "method": "nvmf_create_subsystem", 00:16:03.115 "req_id": 1 00:16:03.115 } 00:16:03.115 Got JSON-RPC error response 00:16:03.115 response: 00:16:03.115 { 00:16:03.115 "code": -32602, 00:16:03.115 "message": "Invalid MN ,F1TZ=?e`P>+RxL;BmyU~n{*yz6]Ac]/9CH[%;]i#" 00:16:03.115 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:16:03.115 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:16:03.372 [2024-07-24 09:01:41.403813] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.372 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:16:03.629 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:16:03.629 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:16:03.629 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:16:03.630 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:16:03.630 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:16:03.887 [2024-07-24 09:01:41.921577] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:16:03.887 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:16:03.887 { 00:16:03.887 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:03.887 "listen_address": { 00:16:03.887 "trtype": "tcp", 00:16:03.887 
"traddr": "", 00:16:03.887 "trsvcid": "4421" 00:16:03.887 }, 00:16:03.887 "method": "nvmf_subsystem_remove_listener", 00:16:03.887 "req_id": 1 00:16:03.887 } 00:16:03.887 Got JSON-RPC error response 00:16:03.887 response: 00:16:03.887 { 00:16:03.887 "code": -32602, 00:16:03.887 "message": "Invalid parameters" 00:16:03.887 }' 00:16:03.887 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:16:03.887 { 00:16:03.887 "nqn": "nqn.2016-06.io.spdk:cnode", 00:16:03.887 "listen_address": { 00:16:03.887 "trtype": "tcp", 00:16:03.887 "traddr": "", 00:16:03.887 "trsvcid": "4421" 00:16:03.887 }, 00:16:03.887 "method": "nvmf_subsystem_remove_listener", 00:16:03.887 "req_id": 1 00:16:03.887 } 00:16:03.887 Got JSON-RPC error response 00:16:03.887 response: 00:16:03.887 { 00:16:03.887 "code": -32602, 00:16:03.887 "message": "Invalid parameters" 00:16:03.887 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:16:03.887 09:01:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16146 -i 0 00:16:04.145 [2024-07-24 09:01:42.166371] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16146: invalid cntlid range [0-65519] 00:16:04.145 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:16:04.145 { 00:16:04.145 "nqn": "nqn.2016-06.io.spdk:cnode16146", 00:16:04.145 "min_cntlid": 0, 00:16:04.145 "method": "nvmf_create_subsystem", 00:16:04.145 "req_id": 1 00:16:04.145 } 00:16:04.145 Got JSON-RPC error response 00:16:04.145 response: 00:16:04.145 { 00:16:04.145 "code": -32602, 00:16:04.145 "message": "Invalid cntlid range [0-65519]" 00:16:04.145 }' 00:16:04.145 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:16:04.145 { 00:16:04.145 "nqn": "nqn.2016-06.io.spdk:cnode16146", 00:16:04.145 "min_cntlid": 0, 00:16:04.145 "method": "nvmf_create_subsystem", 00:16:04.145 "req_id": 1 00:16:04.145 } 00:16:04.145 Got JSON-RPC error response 00:16:04.145 response: 00:16:04.145 { 00:16:04.145 "code": -32602, 00:16:04.145 "message": "Invalid cntlid range [0-65519]" 00:16:04.145 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:04.145 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23186 -i 65520 00:16:04.404 [2024-07-24 09:01:42.415152] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23186: invalid cntlid range [65520-65519] 00:16:04.404 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:16:04.404 { 00:16:04.404 "nqn": "nqn.2016-06.io.spdk:cnode23186", 00:16:04.404 "min_cntlid": 65520, 00:16:04.404 "method": "nvmf_create_subsystem", 00:16:04.404 "req_id": 1 00:16:04.404 } 00:16:04.404 Got JSON-RPC error response 00:16:04.404 response: 00:16:04.404 { 00:16:04.404 "code": -32602, 00:16:04.404 "message": "Invalid cntlid range [65520-65519]" 00:16:04.404 }' 00:16:04.404 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:16:04.404 { 00:16:04.404 "nqn": "nqn.2016-06.io.spdk:cnode23186", 00:16:04.404 "min_cntlid": 65520, 00:16:04.404 "method": "nvmf_create_subsystem", 00:16:04.404 "req_id": 1 00:16:04.404 } 00:16:04.404 Got JSON-RPC error response 00:16:04.404 response: 
00:16:04.404 { 00:16:04.404 "code": -32602, 00:16:04.404 "message": "Invalid cntlid range [65520-65519]" 00:16:04.404 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:04.404 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6747 -I 0 00:16:04.661 [2024-07-24 09:01:42.659947] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6747: invalid cntlid range [1-0] 00:16:04.661 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:16:04.661 { 00:16:04.662 "nqn": "nqn.2016-06.io.spdk:cnode6747", 00:16:04.662 "max_cntlid": 0, 00:16:04.662 "method": "nvmf_create_subsystem", 00:16:04.662 "req_id": 1 00:16:04.662 } 00:16:04.662 Got JSON-RPC error response 00:16:04.662 response: 00:16:04.662 { 00:16:04.662 "code": -32602, 00:16:04.662 "message": "Invalid cntlid range [1-0]" 00:16:04.662 }' 00:16:04.662 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:16:04.662 { 00:16:04.662 "nqn": "nqn.2016-06.io.spdk:cnode6747", 00:16:04.662 "max_cntlid": 0, 00:16:04.662 "method": "nvmf_create_subsystem", 00:16:04.662 "req_id": 1 00:16:04.662 } 00:16:04.662 Got JSON-RPC error response 00:16:04.662 response: 00:16:04.662 { 00:16:04.662 "code": -32602, 00:16:04.662 "message": "Invalid cntlid range [1-0]" 00:16:04.662 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:04.662 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6428 -I 65520 00:16:04.919 [2024-07-24 09:01:42.908795] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6428: invalid cntlid range [1-65520] 00:16:04.919 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:16:04.919 { 00:16:04.919 "nqn": "nqn.2016-06.io.spdk:cnode6428", 00:16:04.919 "max_cntlid": 65520, 00:16:04.919 "method": "nvmf_create_subsystem", 00:16:04.919 "req_id": 1 00:16:04.919 } 00:16:04.919 Got JSON-RPC error response 00:16:04.919 response: 00:16:04.919 { 00:16:04.919 "code": -32602, 00:16:04.919 "message": "Invalid cntlid range [1-65520]" 00:16:04.919 }' 00:16:04.919 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:16:04.919 { 00:16:04.919 "nqn": "nqn.2016-06.io.spdk:cnode6428", 00:16:04.919 "max_cntlid": 65520, 00:16:04.919 "method": "nvmf_create_subsystem", 00:16:04.919 "req_id": 1 00:16:04.919 } 00:16:04.919 Got JSON-RPC error response 00:16:04.919 response: 00:16:04.919 { 00:16:04.919 "code": -32602, 00:16:04.919 "message": "Invalid cntlid range [1-65520]" 00:16:04.919 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:04.919 09:01:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25280 -i 6 -I 5 00:16:05.177 [2024-07-24 09:01:43.157622] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25280: invalid cntlid range [6-5] 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:16:05.177 { 00:16:05.177 "nqn": "nqn.2016-06.io.spdk:cnode25280", 00:16:05.177 "min_cntlid": 6, 00:16:05.177 "max_cntlid": 5, 00:16:05.177 "method": 
"nvmf_create_subsystem", 00:16:05.177 "req_id": 1 00:16:05.177 } 00:16:05.177 Got JSON-RPC error response 00:16:05.177 response: 00:16:05.177 { 00:16:05.177 "code": -32602, 00:16:05.177 "message": "Invalid cntlid range [6-5]" 00:16:05.177 }' 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:16:05.177 { 00:16:05.177 "nqn": "nqn.2016-06.io.spdk:cnode25280", 00:16:05.177 "min_cntlid": 6, 00:16:05.177 "max_cntlid": 5, 00:16:05.177 "method": "nvmf_create_subsystem", 00:16:05.177 "req_id": 1 00:16:05.177 } 00:16:05.177 Got JSON-RPC error response 00:16:05.177 response: 00:16:05.177 { 00:16:05.177 "code": -32602, 00:16:05.177 "message": "Invalid cntlid range [6-5]" 00:16:05.177 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:16:05.177 { 00:16:05.177 "name": "foobar", 00:16:05.177 "method": "nvmf_delete_target", 00:16:05.177 "req_id": 1 00:16:05.177 } 00:16:05.177 Got JSON-RPC error response 00:16:05.177 response: 00:16:05.177 { 00:16:05.177 "code": -32602, 00:16:05.177 "message": "The specified target doesn'\''t exist, cannot delete it." 00:16:05.177 }' 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:16:05.177 { 00:16:05.177 "name": "foobar", 00:16:05.177 "method": "nvmf_delete_target", 00:16:05.177 "req_id": 1 00:16:05.177 } 00:16:05.177 Got JSON-RPC error response 00:16:05.177 response: 00:16:05.177 { 00:16:05.177 "code": -32602, 00:16:05.177 "message": "The specified target doesn't exist, cannot delete it." 
00:16:05.177 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:05.177 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:05.177 rmmod nvme_tcp 00:16:05.434 rmmod nvme_fabrics 00:16:05.434 rmmod nvme_keyring 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3750020 ']' 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3750020 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3750020 ']' 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3750020 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3750020 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3750020' 00:16:05.434 killing process with pid 3750020 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3750020 00:16:05.434 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3750020 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:05.692 
09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:05.692 09:01:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.597 00:16:07.597 real 0m8.594s 00:16:07.597 user 0m20.206s 00:16:07.597 sys 0m2.379s 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:16:07.597 ************************************ 00:16:07.597 END TEST nvmf_invalid 00:16:07.597 ************************************ 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:07.597 ************************************ 00:16:07.597 START TEST nvmf_connect_stress 00:16:07.597 ************************************ 00:16:07.597 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:16:07.856 * Looking for test storage... 00:16:07.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.856 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.857 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:09.761 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:09.761 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:09.761 Found net devices under 0000:09:00.0: cvl_0_0 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.761 09:01:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:09.761 Found net devices under 0000:09:00.1: cvl_0_1 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.761 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.762 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:10.020 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:10.020 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:16:10.020 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:10.020 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:10.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:10.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:16:10.021 00:16:10.021 --- 10.0.0.2 ping statistics --- 00:16:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.021 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:10.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:10.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:16:10.021 00:16:10.021 --- 10.0.0.1 ping statistics --- 00:16:10.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:10.021 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:10.021 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3752646 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3752646 00:16:10.021 09:01:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3752646 ']' 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:10.021 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.021 [2024-07-24 09:01:48.047679] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:16:10.021 [2024-07-24 09:01:48.047751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.021 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.021 [2024-07-24 09:01:48.083173] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:10.021 [2024-07-24 09:01:48.115038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.280 [2024-07-24 09:01:48.209044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.280 [2024-07-24 09:01:48.209111] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.280 [2024-07-24 09:01:48.209147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:10.280 [2024-07-24 09:01:48.209171] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:10.280 [2024-07-24 09:01:48.209190] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
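
The records above launch the SPDK target inside the test namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE) and then block in waitforlisten 3752646 until the target's JSON-RPC socket comes up. A minimal sketch of such a wait loop, assuming the default /var/tmp/spdk.sock socket path — the helper name and retry budget here are illustrative, not the exact autotest_common.sh implementation:

    wait_for_rpc_socket() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
            [[ -S $sock ]] && return 0               # RPC unix socket is accepting
            sleep 0.1
        done
        return 1                                     # timed out waiting for the socket
    }

Checking kill -0 before the socket test means a crashed target fails fast instead of burning the whole retry budget, which is why the reactor startup notices below appear before any RPC traffic.
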
00:16:10.280 [2024-07-24 09:01:48.209270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.280 [2024-07-24 09:01:48.209326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:10.280 [2024-07-24 09:01:48.209332] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.280 [2024-07-24 09:01:48.353603] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.280 [2024-07-24 09:01:48.384417] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.280 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.541 NULL1 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=3752789 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.541 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
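
With the target listening, the trace above configures it over JSON-RPC: a TCP transport with 8192-byte IO units (-u 8192), subsystem nqn.2016-06.io.spdk:cnode1 allowing any host (-a) with serial SPDK00000000000001 and up to 10 namespaces (-m 10), a listener on 10.0.0.2 port 4420, and a 1000 MB null bdev (NULL1) with 512-byte blocks. The same sequence can be issued directly with scripts/rpc.py — a sketch reusing the exact flags traced above; the rpc shorthand and default socket are assumptions, since this job goes through its rpc_cmd wrapper instead:

    rpc=./scripts/rpc.py   # hypothetical shorthand for the SPDK rpc client
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512

The connect_stress binary is then aimed at that listener for a 10-second run (-t 10), while the "for i in $(seq 1 20)" / cat records continuing below queue twenty RPC snippets into rpc.txt (their contents are not echoed in this trace); the repeating kill -0 3752789 records that follow poll the stress process on each iteration until it exits.
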
00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.542 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:10.803 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.803 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:10.803 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:10.803 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.803 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.063 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.063 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:11.063 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.063 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.063 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.322 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.323 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:11.323 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.323 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.323 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:11.891 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.891 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:11.891 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:11.891 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.891 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.152 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.152 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:12.152 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.152 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.152 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.411 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.411 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:12.411 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.411 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.411 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.670 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.670 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:12.670 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.670 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.670 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:12.929 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:12.929 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:12.929 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:12.929 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:12.929 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.495 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.495 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:13.495 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.495 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.495 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:13.755 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.755 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:13.755 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:13.755 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.755 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.016 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.016 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:14.016 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.016 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.016 09:01:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.274 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.274 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:14.274 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.274 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.274 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:14.531 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.532 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:14.532 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:14.532 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.532 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.100 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.100 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:15.100 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.100 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.100 09:01:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.380 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.380 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:15.380 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.380 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.380 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.649 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.649 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:15.649 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.649 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.649 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:15.909 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.909 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:15.909 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:15.909 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.909 09:01:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.167 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.167 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:16.167 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.167 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.167 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.737 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.737 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:16.737 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.737 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.737 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:16.996 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.996 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:16.996 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:16.996 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.996 09:01:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.256 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.256 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:17.256 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.256 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.256 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.514 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.514 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:17.514 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.514 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.514 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:17.772 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.772 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:17.772 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:17.772 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.772 09:01:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.340 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.340 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:18.340 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.341 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.341 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.601 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.601 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:18.601 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.601 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.601 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:18.880 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.880 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:18.880 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:18.880 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.880 09:01:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.145 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.145 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:19.145 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.145 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.145 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.403 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.403 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:19.403 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.403 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.403 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:19.662 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:19.662 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:19.662 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:19.663 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:19.663 09:01:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.232 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.232 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:20.232 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.232 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.232 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.490 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.491 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:20.491 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:20.491 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.491 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:20.491 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:20.749 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.749 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3752789 00:16:20.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (3752789) - No such process 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3752789 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.750 rmmod nvme_tcp 00:16:20.750 rmmod nvme_fabrics 00:16:20.750 rmmod nvme_keyring 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3752646 ']' 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3752646 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3752646 ']' 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3752646 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3752646 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3752646' 00:16:20.750 killing process with pid 3752646 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3752646 00:16:20.750 09:01:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3752646 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:21.008 09:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.549 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.549 00:16:23.549 real 0m15.388s 00:16:23.549 user 0m38.132s 00:16:23.549 sys 0m6.176s 00:16:23.549 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.549 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:16:23.549 ************************************ 00:16:23.549 END TEST nvmf_connect_stress 00:16:23.549 ************************************ 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.550 ************************************ 00:16:23.550 START TEST nvmf_fused_ordering 00:16:23.550 ************************************ 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:16:23.550 * Looking for test storage... 
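The connect_stress teardown traced above follows a fixed pattern: flush I/O, unload the kernel NVMe-oF initiator modules, verify the target PID is still alive (and is not sudo) before killing it, then tear down the test namespace. A minimal sketch of that pattern — $NVMF_PID and the explicit namespace delete are assumptions standing in for the suite's killprocess/remove_spdk_ns helpers:

sync
modprobe -v -r nvme-tcp                              # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
if kill -0 "$NVMF_PID" 2>/dev/null; then             # is the target process still alive?
    [ "$(ps --no-headers -o comm= "$NVMF_PID")" != sudo ] && kill "$NVMF_PID"
    wait "$NVMF_PID"                                 # reap it before touching the namespace
fi
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true  # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1                             # drop the initiator-side address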
00:16:23.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.550 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.456 09:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.456 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:25.456 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:25.457 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
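The loop above matched the two e810 functions (0x8086:0x159b) by PCI ID; each is then resolved to its kernel net device by expanding the function's sysfs directory, which produces the "Found net devices under ..." lines that follow. A minimal sketch of that resolution, using the PCI address from this run (variable names illustrative):

pci=0000:09:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)     # netdev entries behind this PCI function
pci_net_devs=("${pci_net_devs[@]##*/}")              # strip the sysfs path, keep the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"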
00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.457 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:25.458 Found net devices under 0000:09:00.0: cvl_0_0 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:25.458 Found net devices under 0000:09:00.1: cvl_0_1 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.458 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:25.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:16:25.459 00:16:25.459 --- 10.0.0.2 ping statistics --- 00:16:25.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.459 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:25.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:25.459 00:16:25.459 --- 10.0.0.1 ping statistics --- 00:16:25.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.459 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.459 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3755925 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3755925 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3755925 ']' 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.460 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.460 [2024-07-24 09:02:03.423425] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
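The nvmf_tcp_init sequence traced above splits the two NIC ports across network namespaces — initiator cvl_0_1 (10.0.0.1) stays in the root namespace, target cvl_0_0 (10.0.0.2) moves into cvl_0_0_ns_spdk — verifies reachability in both directions, and only then launches nvmf_tgt wrapped in ip netns exec so the target runs on the namespaced side. Condensed, the same commands are:

ip netns add cvl_0_0_ns_spdk                         # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator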
00:16:25.460 [2024-07-24 09:02:03.423509] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.460 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.460 [2024-07-24 09:02:03.459435] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:25.460 [2024-07-24 09:02:03.485959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.724 [2024-07-24 09:02:03.571986] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.724 [2024-07-24 09:02:03.572046] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.724 [2024-07-24 09:02:03.572067] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.724 [2024-07-24 09:02:03.572098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.724 [2024-07-24 09:02:03.572125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.724 [2024-07-24 09:02:03.572161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 [2024-07-24 09:02:03.716021] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:25.724 
09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 [2024-07-24 09:02:03.732245] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 NULL1 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.724 09:02:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:25.724 [2024-07-24 09:02:03.778918] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:16:25.724 [2024-07-24 09:02:03.778961] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3755951 ] 00:16:25.724 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.724 [2024-07-24 09:02:03.815852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
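With nvmf_tgt up and listening for RPCs, the rpc_cmd calls above assemble the target in order: TCP transport, subsystem, TCP listener, a 1000 MiB null bdev, and that bdev attached as namespace 1; the fused_ordering initiator then connects with a transport-ID string. The same bring-up, sketched with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket (rpc_cmd is the suite's wrapper around the same RPCs; paths assume an SPDK checkout):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512     # 1000 MiB null bdev, 512-byte blocks
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
./test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow are the app's per-iteration progress output.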
00:16:26.294 Attached to nqn.2016-06.io.spdk:cnode1 00:16:26.294 Namespace ID: 1 size: 1GB 00:16:26.294 fused_ordering(0) 00:16:26.294 fused_ordering(1) 00:16:26.294 fused_ordering(2) 00:16:26.294 fused_ordering(3) 00:16:26.294 fused_ordering(4) 00:16:26.294 fused_ordering(5) 00:16:26.294 fused_ordering(6) 00:16:26.294 fused_ordering(7) 00:16:26.294 fused_ordering(8) 00:16:26.294 fused_ordering(9) 00:16:26.294 fused_ordering(10) 00:16:26.294 fused_ordering(11) 00:16:26.294 fused_ordering(12) 00:16:26.294 fused_ordering(13) 00:16:26.294 fused_ordering(14) 00:16:26.294 fused_ordering(15) 00:16:26.294 fused_ordering(16) 00:16:26.294 fused_ordering(17) 00:16:26.294 fused_ordering(18) 00:16:26.294 fused_ordering(19) 00:16:26.294 fused_ordering(20) 00:16:26.294 fused_ordering(21) 00:16:26.294 fused_ordering(22) 00:16:26.294 fused_ordering(23) 00:16:26.294 fused_ordering(24) 00:16:26.294 fused_ordering(25) 00:16:26.294 fused_ordering(26) 00:16:26.294 fused_ordering(27) 00:16:26.294 fused_ordering(28) 00:16:26.294 fused_ordering(29) 00:16:26.294 fused_ordering(30) 00:16:26.294 fused_ordering(31) 00:16:26.294 fused_ordering(32) 00:16:26.294 fused_ordering(33) 00:16:26.294 fused_ordering(34) 00:16:26.294 fused_ordering(35) 00:16:26.294 fused_ordering(36) 00:16:26.294 fused_ordering(37) 00:16:26.294 fused_ordering(38) 00:16:26.294 fused_ordering(39) 00:16:26.294 fused_ordering(40) 00:16:26.294 fused_ordering(41) 00:16:26.294 fused_ordering(42) 00:16:26.294 fused_ordering(43) 00:16:26.294 fused_ordering(44) 00:16:26.294 fused_ordering(45) 00:16:26.294 fused_ordering(46) 00:16:26.294 fused_ordering(47) 00:16:26.294 fused_ordering(48) 00:16:26.294 fused_ordering(49) 00:16:26.294 fused_ordering(50) 00:16:26.294 fused_ordering(51) 00:16:26.294 fused_ordering(52) 00:16:26.294 fused_ordering(53) 00:16:26.294 fused_ordering(54) 00:16:26.294 fused_ordering(55) 00:16:26.294 fused_ordering(56) 00:16:26.294 fused_ordering(57) 00:16:26.294 fused_ordering(58) 00:16:26.294 fused_ordering(59) 00:16:26.294 fused_ordering(60) 00:16:26.294 fused_ordering(61) 00:16:26.294 fused_ordering(62) 00:16:26.294 fused_ordering(63) 00:16:26.294 fused_ordering(64) 00:16:26.294 fused_ordering(65) 00:16:26.294 fused_ordering(66) 00:16:26.294 fused_ordering(67) 00:16:26.294 fused_ordering(68) 00:16:26.294 fused_ordering(69) 00:16:26.294 fused_ordering(70) 00:16:26.294 fused_ordering(71) 00:16:26.294 fused_ordering(72) 00:16:26.294 fused_ordering(73) 00:16:26.294 fused_ordering(74) 00:16:26.294 fused_ordering(75) 00:16:26.294 fused_ordering(76) 00:16:26.294 fused_ordering(77) 00:16:26.294 fused_ordering(78) 00:16:26.294 fused_ordering(79) 00:16:26.294 fused_ordering(80) 00:16:26.294 fused_ordering(81) 00:16:26.294 fused_ordering(82) 00:16:26.294 fused_ordering(83) 00:16:26.294 fused_ordering(84) 00:16:26.294 fused_ordering(85) 00:16:26.294 fused_ordering(86) 00:16:26.294 fused_ordering(87) 00:16:26.294 fused_ordering(88) 00:16:26.294 fused_ordering(89) 00:16:26.294 fused_ordering(90) 00:16:26.294 fused_ordering(91) 00:16:26.294 fused_ordering(92) 00:16:26.294 fused_ordering(93) 00:16:26.294 fused_ordering(94) 00:16:26.294 fused_ordering(95) 00:16:26.294 fused_ordering(96) 00:16:26.294 fused_ordering(97) 00:16:26.294 fused_ordering(98) 00:16:26.294 fused_ordering(99) 00:16:26.294 fused_ordering(100) 00:16:26.294 fused_ordering(101) 00:16:26.294 fused_ordering(102) 00:16:26.294 fused_ordering(103) 00:16:26.294 fused_ordering(104) 00:16:26.294 fused_ordering(105) 00:16:26.294 fused_ordering(106) 00:16:26.294 fused_ordering(107) 
00:16:26.294 fused_ordering(108) 00:16:26.294 fused_ordering(109) 00:16:26.294 fused_ordering(110) 00:16:26.294 fused_ordering(111) 00:16:26.294 fused_ordering(112) 00:16:26.294 fused_ordering(113) 00:16:26.294 fused_ordering(114) 00:16:26.294 fused_ordering(115) 00:16:26.294 fused_ordering(116) 00:16:26.294 fused_ordering(117) 00:16:26.294 fused_ordering(118) 00:16:26.295 fused_ordering(119) 00:16:26.295 fused_ordering(120) 00:16:26.295 fused_ordering(121) 00:16:26.295 fused_ordering(122) 00:16:26.295 fused_ordering(123) 00:16:26.295 fused_ordering(124) 00:16:26.295 fused_ordering(125) 00:16:26.295 fused_ordering(126) 00:16:26.295 fused_ordering(127) 00:16:26.295 fused_ordering(128) 00:16:26.295 fused_ordering(129) 00:16:26.295 fused_ordering(130) 00:16:26.295 fused_ordering(131) 00:16:26.295 fused_ordering(132) 00:16:26.295 fused_ordering(133) 00:16:26.295 fused_ordering(134) 00:16:26.295 fused_ordering(135) 00:16:26.295 fused_ordering(136) 00:16:26.295 fused_ordering(137) 00:16:26.295 fused_ordering(138) 00:16:26.295 fused_ordering(139) 00:16:26.295 fused_ordering(140) 00:16:26.295 fused_ordering(141) 00:16:26.295 fused_ordering(142) 00:16:26.295 fused_ordering(143) 00:16:26.295 fused_ordering(144) 00:16:26.295 fused_ordering(145) 00:16:26.295 fused_ordering(146) 00:16:26.295 fused_ordering(147) 00:16:26.295 fused_ordering(148) 00:16:26.295 fused_ordering(149) 00:16:26.295 fused_ordering(150) 00:16:26.295 fused_ordering(151) 00:16:26.295 fused_ordering(152) 00:16:26.295 fused_ordering(153) 00:16:26.295 fused_ordering(154) 00:16:26.295 fused_ordering(155) 00:16:26.295 fused_ordering(156) 00:16:26.295 fused_ordering(157) 00:16:26.295 fused_ordering(158) 00:16:26.295 fused_ordering(159) 00:16:26.295 fused_ordering(160) 00:16:26.295 fused_ordering(161) 00:16:26.295 fused_ordering(162) 00:16:26.295 fused_ordering(163) 00:16:26.295 fused_ordering(164) 00:16:26.295 fused_ordering(165) 00:16:26.295 fused_ordering(166) 00:16:26.295 fused_ordering(167) 00:16:26.295 fused_ordering(168) 00:16:26.295 fused_ordering(169) 00:16:26.295 fused_ordering(170) 00:16:26.295 fused_ordering(171) 00:16:26.295 fused_ordering(172) 00:16:26.295 fused_ordering(173) 00:16:26.295 fused_ordering(174) 00:16:26.295 fused_ordering(175) 00:16:26.295 fused_ordering(176) 00:16:26.295 fused_ordering(177) 00:16:26.295 fused_ordering(178) 00:16:26.295 fused_ordering(179) 00:16:26.295 fused_ordering(180) 00:16:26.295 fused_ordering(181) 00:16:26.295 fused_ordering(182) 00:16:26.295 fused_ordering(183) 00:16:26.295 fused_ordering(184) 00:16:26.295 fused_ordering(185) 00:16:26.295 fused_ordering(186) 00:16:26.295 fused_ordering(187) 00:16:26.295 fused_ordering(188) 00:16:26.295 fused_ordering(189) 00:16:26.295 fused_ordering(190) 00:16:26.295 fused_ordering(191) 00:16:26.295 fused_ordering(192) 00:16:26.295 fused_ordering(193) 00:16:26.295 fused_ordering(194) 00:16:26.295 fused_ordering(195) 00:16:26.295 fused_ordering(196) 00:16:26.295 fused_ordering(197) 00:16:26.295 fused_ordering(198) 00:16:26.295 fused_ordering(199) 00:16:26.295 fused_ordering(200) 00:16:26.295 fused_ordering(201) 00:16:26.295 fused_ordering(202) 00:16:26.295 fused_ordering(203) 00:16:26.295 fused_ordering(204) 00:16:26.295 fused_ordering(205) 00:16:26.556 fused_ordering(206) 00:16:26.556 fused_ordering(207) 00:16:26.556 fused_ordering(208) 00:16:26.556 fused_ordering(209) 00:16:26.556 fused_ordering(210) 00:16:26.556 fused_ordering(211) 00:16:26.556 fused_ordering(212) 00:16:26.556 fused_ordering(213) 00:16:26.556 fused_ordering(214) 00:16:26.556 
fused_ordering(215) 00:16:26.556 fused_ordering(216) 00:16:26.556 fused_ordering(217) 00:16:26.556 fused_ordering(218) 00:16:26.556 fused_ordering(219) 00:16:26.556 fused_ordering(220) 00:16:26.556 fused_ordering(221) 00:16:26.556 fused_ordering(222) 00:16:26.556 fused_ordering(223) 00:16:26.556 fused_ordering(224) 00:16:26.556 fused_ordering(225) 00:16:26.556 fused_ordering(226) 00:16:26.556 fused_ordering(227) 00:16:26.556 fused_ordering(228) 00:16:26.556 fused_ordering(229) 00:16:26.556 fused_ordering(230) 00:16:26.556 fused_ordering(231) 00:16:26.556 fused_ordering(232) 00:16:26.556 fused_ordering(233) 00:16:26.556 fused_ordering(234) 00:16:26.556 fused_ordering(235) 00:16:26.556 fused_ordering(236) 00:16:26.556 fused_ordering(237) 00:16:26.556 fused_ordering(238) 00:16:26.556 fused_ordering(239) 00:16:26.556 fused_ordering(240) 00:16:26.556 fused_ordering(241) 00:16:26.556 fused_ordering(242) 00:16:26.556 fused_ordering(243) 00:16:26.556 fused_ordering(244) 00:16:26.556 fused_ordering(245) 00:16:26.556 fused_ordering(246) 00:16:26.556 fused_ordering(247) 00:16:26.556 fused_ordering(248) 00:16:26.556 fused_ordering(249) 00:16:26.556 fused_ordering(250) 00:16:26.556 fused_ordering(251) 00:16:26.556 fused_ordering(252) 00:16:26.556 fused_ordering(253) 00:16:26.556 fused_ordering(254) 00:16:26.556 fused_ordering(255) 00:16:26.556 fused_ordering(256) 00:16:26.556 fused_ordering(257) 00:16:26.556 fused_ordering(258) 00:16:26.556 fused_ordering(259) 00:16:26.556 fused_ordering(260) 00:16:26.556 fused_ordering(261) 00:16:26.556 fused_ordering(262) 00:16:26.556 fused_ordering(263) 00:16:26.556 fused_ordering(264) 00:16:26.556 fused_ordering(265) 00:16:26.556 fused_ordering(266) 00:16:26.556 fused_ordering(267) 00:16:26.556 fused_ordering(268) 00:16:26.556 fused_ordering(269) 00:16:26.556 fused_ordering(270) 00:16:26.556 fused_ordering(271) 00:16:26.556 fused_ordering(272) 00:16:26.556 fused_ordering(273) 00:16:26.556 fused_ordering(274) 00:16:26.556 fused_ordering(275) 00:16:26.556 fused_ordering(276) 00:16:26.556 fused_ordering(277) 00:16:26.556 fused_ordering(278) 00:16:26.556 fused_ordering(279) 00:16:26.556 fused_ordering(280) 00:16:26.556 fused_ordering(281) 00:16:26.556 fused_ordering(282) 00:16:26.556 fused_ordering(283) 00:16:26.556 fused_ordering(284) 00:16:26.556 fused_ordering(285) 00:16:26.556 fused_ordering(286) 00:16:26.556 fused_ordering(287) 00:16:26.556 fused_ordering(288) 00:16:26.556 fused_ordering(289) 00:16:26.556 fused_ordering(290) 00:16:26.556 fused_ordering(291) 00:16:26.556 fused_ordering(292) 00:16:26.556 fused_ordering(293) 00:16:26.556 fused_ordering(294) 00:16:26.556 fused_ordering(295) 00:16:26.556 fused_ordering(296) 00:16:26.556 fused_ordering(297) 00:16:26.556 fused_ordering(298) 00:16:26.556 fused_ordering(299) 00:16:26.556 fused_ordering(300) 00:16:26.556 fused_ordering(301) 00:16:26.556 fused_ordering(302) 00:16:26.556 fused_ordering(303) 00:16:26.556 fused_ordering(304) 00:16:26.556 fused_ordering(305) 00:16:26.556 fused_ordering(306) 00:16:26.556 fused_ordering(307) 00:16:26.556 fused_ordering(308) 00:16:26.556 fused_ordering(309) 00:16:26.556 fused_ordering(310) 00:16:26.556 fused_ordering(311) 00:16:26.556 fused_ordering(312) 00:16:26.556 fused_ordering(313) 00:16:26.556 fused_ordering(314) 00:16:26.556 fused_ordering(315) 00:16:26.556 fused_ordering(316) 00:16:26.556 fused_ordering(317) 00:16:26.556 fused_ordering(318) 00:16:26.556 fused_ordering(319) 00:16:26.556 fused_ordering(320) 00:16:26.556 fused_ordering(321) 00:16:26.556 fused_ordering(322) 
00:16:26.556 fused_ordering(323) 00:16:26.556 fused_ordering(324) 00:16:26.556 fused_ordering(325) 00:16:26.556 fused_ordering(326) 00:16:26.556 fused_ordering(327) 00:16:26.556 fused_ordering(328) 00:16:26.556 fused_ordering(329) 00:16:26.556 fused_ordering(330) 00:16:26.556 fused_ordering(331) 00:16:26.556 fused_ordering(332) 00:16:26.556 fused_ordering(333) 00:16:26.556 fused_ordering(334) 00:16:26.556 fused_ordering(335) 00:16:26.556 fused_ordering(336) 00:16:26.556 fused_ordering(337) 00:16:26.556 fused_ordering(338) 00:16:26.556 fused_ordering(339) 00:16:26.556 fused_ordering(340) 00:16:26.556 fused_ordering(341) 00:16:26.556 fused_ordering(342) 00:16:26.556 fused_ordering(343) 00:16:26.556 fused_ordering(344) 00:16:26.556 fused_ordering(345) 00:16:26.556 fused_ordering(346) 00:16:26.556 fused_ordering(347) 00:16:26.556 fused_ordering(348) 00:16:26.556 fused_ordering(349) 00:16:26.556 fused_ordering(350) 00:16:26.556 fused_ordering(351) 00:16:26.556 fused_ordering(352) 00:16:26.556 fused_ordering(353) 00:16:26.556 fused_ordering(354) 00:16:26.556 fused_ordering(355) 00:16:26.556 fused_ordering(356) 00:16:26.556 fused_ordering(357) 00:16:26.556 fused_ordering(358) 00:16:26.556 fused_ordering(359) 00:16:26.556 fused_ordering(360) 00:16:26.556 fused_ordering(361) 00:16:26.556 fused_ordering(362) 00:16:26.556 fused_ordering(363) 00:16:26.556 fused_ordering(364) 00:16:26.556 fused_ordering(365) 00:16:26.556 fused_ordering(366) 00:16:26.556 fused_ordering(367) 00:16:26.556 fused_ordering(368) 00:16:26.556 fused_ordering(369) 00:16:26.556 fused_ordering(370) 00:16:26.556 fused_ordering(371) 00:16:26.556 fused_ordering(372) 00:16:26.556 fused_ordering(373) 00:16:26.556 fused_ordering(374) 00:16:26.556 fused_ordering(375) 00:16:26.556 fused_ordering(376) 00:16:26.556 fused_ordering(377) 00:16:26.556 fused_ordering(378) 00:16:26.556 fused_ordering(379) 00:16:26.556 fused_ordering(380) 00:16:26.556 fused_ordering(381) 00:16:26.556 fused_ordering(382) 00:16:26.556 fused_ordering(383) 00:16:26.556 fused_ordering(384) 00:16:26.556 fused_ordering(385) 00:16:26.556 fused_ordering(386) 00:16:26.556 fused_ordering(387) 00:16:26.556 fused_ordering(388) 00:16:26.556 fused_ordering(389) 00:16:26.556 fused_ordering(390) 00:16:26.556 fused_ordering(391) 00:16:26.556 fused_ordering(392) 00:16:26.556 fused_ordering(393) 00:16:26.556 fused_ordering(394) 00:16:26.556 fused_ordering(395) 00:16:26.556 fused_ordering(396) 00:16:26.556 fused_ordering(397) 00:16:26.556 fused_ordering(398) 00:16:26.556 fused_ordering(399) 00:16:26.556 fused_ordering(400) 00:16:26.556 fused_ordering(401) 00:16:26.556 fused_ordering(402) 00:16:26.556 fused_ordering(403) 00:16:26.556 fused_ordering(404) 00:16:26.556 fused_ordering(405) 00:16:26.556 fused_ordering(406) 00:16:26.556 fused_ordering(407) 00:16:26.556 fused_ordering(408) 00:16:26.556 fused_ordering(409) 00:16:26.556 fused_ordering(410) 00:16:27.126 fused_ordering(411) 00:16:27.126 fused_ordering(412) 00:16:27.126 fused_ordering(413) 00:16:27.126 fused_ordering(414) 00:16:27.126 fused_ordering(415) 00:16:27.126 fused_ordering(416) 00:16:27.126 fused_ordering(417) 00:16:27.126 fused_ordering(418) 00:16:27.126 fused_ordering(419) 00:16:27.126 fused_ordering(420) 00:16:27.126 fused_ordering(421) 00:16:27.126 fused_ordering(422) 00:16:27.126 fused_ordering(423) 00:16:27.126 fused_ordering(424) 00:16:27.126 fused_ordering(425) 00:16:27.126 fused_ordering(426) 00:16:27.126 fused_ordering(427) 00:16:27.126 fused_ordering(428) 00:16:27.126 fused_ordering(429) 00:16:27.126 
fused_ordering(430) 00:16:27.126 fused_ordering(431) 00:16:27.126 fused_ordering(432) 00:16:27.126 fused_ordering(433) 00:16:27.126 fused_ordering(434) 00:16:27.126 fused_ordering(435) 00:16:27.126 fused_ordering(436) 00:16:27.126 fused_ordering(437) 00:16:27.126 fused_ordering(438) 00:16:27.126 fused_ordering(439) 00:16:27.126 fused_ordering(440) 00:16:27.126 fused_ordering(441) 00:16:27.126 fused_ordering(442) 00:16:27.126 fused_ordering(443) 00:16:27.126 fused_ordering(444) 00:16:27.126 fused_ordering(445) 00:16:27.126 fused_ordering(446) 00:16:27.126 fused_ordering(447) 00:16:27.126 fused_ordering(448) 00:16:27.126 fused_ordering(449) 00:16:27.126 fused_ordering(450) 00:16:27.126 fused_ordering(451) 00:16:27.126 fused_ordering(452) 00:16:27.126 fused_ordering(453) 00:16:27.126 fused_ordering(454) 00:16:27.126 fused_ordering(455) 00:16:27.126 fused_ordering(456) 00:16:27.126 fused_ordering(457) 00:16:27.126 fused_ordering(458) 00:16:27.126 fused_ordering(459) 00:16:27.126 fused_ordering(460) 00:16:27.126 fused_ordering(461) 00:16:27.126 fused_ordering(462) 00:16:27.126 fused_ordering(463) 00:16:27.126 fused_ordering(464) 00:16:27.126 fused_ordering(465) 00:16:27.126 fused_ordering(466) 00:16:27.126 fused_ordering(467) 00:16:27.126 fused_ordering(468) 00:16:27.126 fused_ordering(469) 00:16:27.126 fused_ordering(470) 00:16:27.126 fused_ordering(471) 00:16:27.126 fused_ordering(472) 00:16:27.126 fused_ordering(473) 00:16:27.126 fused_ordering(474) 00:16:27.126 fused_ordering(475) 00:16:27.126 fused_ordering(476) 00:16:27.126 fused_ordering(477) 00:16:27.126 fused_ordering(478) 00:16:27.126 fused_ordering(479) 00:16:27.126 fused_ordering(480) 00:16:27.126 fused_ordering(481) 00:16:27.126 fused_ordering(482) 00:16:27.126 fused_ordering(483) 00:16:27.126 fused_ordering(484) 00:16:27.126 fused_ordering(485) 00:16:27.126 fused_ordering(486) 00:16:27.126 fused_ordering(487) 00:16:27.126 fused_ordering(488) 00:16:27.126 fused_ordering(489) 00:16:27.126 fused_ordering(490) 00:16:27.126 fused_ordering(491) 00:16:27.126 fused_ordering(492) 00:16:27.126 fused_ordering(493) 00:16:27.126 fused_ordering(494) 00:16:27.126 fused_ordering(495) 00:16:27.126 fused_ordering(496) 00:16:27.126 fused_ordering(497) 00:16:27.126 fused_ordering(498) 00:16:27.126 fused_ordering(499) 00:16:27.126 fused_ordering(500) 00:16:27.126 fused_ordering(501) 00:16:27.126 fused_ordering(502) 00:16:27.126 fused_ordering(503) 00:16:27.126 fused_ordering(504) 00:16:27.126 fused_ordering(505) 00:16:27.126 fused_ordering(506) 00:16:27.126 fused_ordering(507) 00:16:27.126 fused_ordering(508) 00:16:27.126 fused_ordering(509) 00:16:27.126 fused_ordering(510) 00:16:27.126 fused_ordering(511) 00:16:27.126 fused_ordering(512) 00:16:27.126 fused_ordering(513) 00:16:27.126 fused_ordering(514) 00:16:27.126 fused_ordering(515) 00:16:27.126 fused_ordering(516) 00:16:27.126 fused_ordering(517) 00:16:27.126 fused_ordering(518) 00:16:27.126 fused_ordering(519) 00:16:27.126 fused_ordering(520) 00:16:27.126 fused_ordering(521) 00:16:27.126 fused_ordering(522) 00:16:27.126 fused_ordering(523) 00:16:27.126 fused_ordering(524) 00:16:27.126 fused_ordering(525) 00:16:27.126 fused_ordering(526) 00:16:27.126 fused_ordering(527) 00:16:27.126 fused_ordering(528) 00:16:27.126 fused_ordering(529) 00:16:27.126 fused_ordering(530) 00:16:27.126 fused_ordering(531) 00:16:27.126 fused_ordering(532) 00:16:27.126 fused_ordering(533) 00:16:27.126 fused_ordering(534) 00:16:27.126 fused_ordering(535) 00:16:27.126 fused_ordering(536) 00:16:27.126 fused_ordering(537) 
00:16:27.126 fused_ordering(538) ... 00:16:28.633 fused_ordering(1023) [486 sequential fused_ordering entries elided]
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:28.633 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:28.634 rmmod nvme_tcp
00:16:28.634 rmmod nvme_fabrics
00:16:28.634 rmmod nvme_keyring
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3755925 ']'
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3755925
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3755925 ']'
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3755925
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3755925
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:16:28.634 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3755925'
killing process with pid 3755925
09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3755925
09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3755925
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:16:28.892 09:02:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:16:30.799
00:16:30.799 real 0m7.757s
00:16:30.799 user 0m5.262s
00:16:30.799 sys 0m3.530s
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:16:30.799 ************************************
00:16:30.799 END TEST nvmf_fused_ordering
00:16:30.799 ************************************
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:16:30.799 09:02:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:31.057 ************************************
00:16:31.057 START TEST nvmf_ns_masking
00:16:31.057 ************************************
00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
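The teardown trace above steps through SPDK's killprocess helper (common/autotest_common.sh) one xtrace line at a time. As a reading aid, here is a minimal bash sketch of that flow, reconstructed only from the traced commands; the helper's name and the individual commands come from the log, but the exact upstream body may differ:

    # Hypothetical reconstruction of the killprocess flow traced above.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # @948: refuse an empty pid
        kill -0 "$pid" 2>/dev/null || return 0   # @952: already gone, nothing to do
        local process_name=
        if [ "$(uname)" = Linux ]; then          # @953: this ps syntax is Linux-only
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_1 here
        fi
        if [ "$process_name" = sudo ]; then      # @958: kill sudo's child, not sudo
            pid=$(ps --ppid "$pid" -o pid= | tr -d ' ')
        fi
        echo "killing process with pid $pid"     # @966
        kill "$pid"                              # @967
        wait "$pid" || true                      # @972: reap it so the target exits cleanly
    }

In this run the traced pid 3755925 resolves to process_name=reactor_1, so the sudo branch is skipped and the nvmf_tgt reactor is killed and reaped directly before the ns_masking test starts below.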
00:16:31.057 * Looking for test storage... 00:16:31.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:31.057 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:31.058 09:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:16:31.058 09:02:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=09320381-bd07-4e18-8eb4-1c74c31e6d29 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f7f26b47-cdf4-4889-bde6-46fe9cd366ba 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=cce66c54-ee69-4a1f-a8e3-6bb031bb9582 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:16:31.058 09:02:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:32.962 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:32.962 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:32.962 Found net devices under 0000:09:00.0: cvl_0_0 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:32.962 Found net devices under 0000:09:00.1: cvl_0_1 00:16:32.962 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:32.963 09:02:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:32.963 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:32.963 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:32.963 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:32.963 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:33.222 09:02:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:33.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:16:33.222 00:16:33.222 --- 10.0.0.2 ping statistics --- 00:16:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.222 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:33.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:33.222 00:16:33.222 --- 10.0.0.1 ping statistics --- 00:16:33.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.222 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3758272 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3758272 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3758272 ']' 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.222 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.222 [2024-07-24 09:02:11.178223] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:16:33.222 [2024-07-24 09:02:11.178312] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.222 EAL: No free 2048 kB hugepages reported on node 1 00:16:33.222 [2024-07-24 09:02:11.216598] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:33.222 [2024-07-24 09:02:11.243284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.222 [2024-07-24 09:02:11.330280] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.222 [2024-07-24 09:02:11.330335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.222 [2024-07-24 09:02:11.330363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:33.222 [2024-07-24 09:02:11.330375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:33.222 [2024-07-24 09:02:11.330385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
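From here the test drives the freshly started nvmf_tgt (running inside the cvl_0_0_ns_spdk namespace and listening at 10.0.0.2) entirely over JSON-RPC. Condensed from the trace that follows, the bring-up and first connection amount to the sequence below; this is a replay of the traced commands with the same names and addresses, not an independent recipe:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport, flags exactly as traced
    $rpc bdev_malloc_create 64 512 -b Malloc1       # 64 MB RAM disks with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect with an explicit host NQN and host ID, so the masking
    # steps later in the trace can grant or revoke visibility for this identity.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I cce66c54-ee69-4a1f-a8e3-6bb031bb9582 -a 10.0.0.2 -s 4420 -i 4

The masking itself is exercised further down: the namespace is re-added with --no-auto-visible, nvmf_ns_add_host / nvmf_ns_remove_host then toggle its visibility for nqn.2016-06.io.spdk:host1, and the ns_is_visible checks read each namespace's nguid via nvme list-ns and nvme id-ns to assert what the connected host can actually see.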
00:16:33.222 [2024-07-24 09:02:11.330411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:33.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:33.739 [2024-07-24 09:02:11.693664] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:33.739 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:16:33.739 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:16:33.739 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:33.997 Malloc1 00:16:33.997 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:34.255 Malloc2 00:16:34.255 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:34.514 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:34.775 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:35.035 [2024-07-24 09:02:13.038717] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.035 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:16:35.035 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cce66c54-ee69-4a1f-a8e3-6bb031bb9582 -a 10.0.0.2 -s 4420 -i 4 00:16:35.295 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:16:35.295 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:16:35.295 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:35.295 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:16:35.295 
09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:37.201 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:37.459 [ 0]:0x1 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3408d12c31745749c361cd2ecd5e89a 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3408d12c31745749c361cd2ecd5e89a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.459 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:37.718 [ 0]:0x1 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3408d12c31745749c361cd2ecd5e89a 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3408d12c31745749c361cd2ecd5e89a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.718 09:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:37.718 [ 1]:0x2 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:37.718 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:37.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:37.976 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:38.234 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cce66c54-ee69-4a1f-a8e3-6bb031bb9582 -a 10.0.0.2 -s 4420 -i 4 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:16:38.492 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # 
return 0 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.056 [ 0]:0x2 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.056 09:02:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.056 [ 0]:0x1 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3408d12c31745749c361cd2ecd5e89a 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3408d12c31745749c361cd2ecd5e89a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.056 [ 1]:0x2 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.056 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:41.314 09:02:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:41.314 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:41.315 [ 0]:0x2 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:41.315 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:41.573 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:41.573 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:41.573 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:41.573 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:41.573 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.573 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:41.832 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:41.832 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cce66c54-ee69-4a1f-a8e3-6bb031bb9582 -a 10.0.0.2 -s 4420 -i 4 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:16:42.091 09:02:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:43.996 09:02:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:43.996 [ 0]:0x1 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e3408d12c31745749c361cd2ecd5e89a 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e3408d12c31745749c361cd2ecd5e89a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:43.996 [ 1]:0x2 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:43.996 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.254 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:44.255 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.255 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.513 [ 0]:0x2 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.513 09:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:44.513 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:44.771 [2024-07-24 09:02:22.753925] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:44.771 request: 00:16:44.771 { 00:16:44.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.771 "nsid": 2, 00:16:44.771 "host": "nqn.2016-06.io.spdk:host1", 00:16:44.771 "method": "nvmf_ns_remove_host", 00:16:44.771 "req_id": 1 00:16:44.771 } 00:16:44.771 Got JSON-RPC error response 00:16:44.771 response: 00:16:44.771 { 00:16:44.771 "code": -32602, 00:16:44.771 "message": "Invalid parameters" 00:16:44.771 } 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:16:44.771 09:02:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:16:44.771 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:44.772 [ 0]:0x2 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ed095cc6478a4d4a9b0b62400561dc9b 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ed095cc6478a4d4a9b0b62400561dc9b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:44.772 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:45.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3759776 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3759776 /var/tmp/host.sock 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3759776 ']' 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:45.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:45.032 09:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:45.032 [2024-07-24 09:02:22.945659] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:16:45.032 [2024-07-24 09:02:22.945736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3759776 ] 00:16:45.032 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.032 [2024-07-24 09:02:22.976727] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
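[Editor's note] Every ns_is_visible check traced above follows the same three-step pattern from target/ns_masking.sh. A minimal bash sketch reconstructed from the xtrace — an approximation of the helper, not its verbatim source; the device node and NGUID values are the ones this run happened to get:

    # Sketch of the visibility check exercised above (reconstructed from the xtrace).
    # A namespace is visible when it shows up in list-ns and id-ns reports a
    # non-zero NGUID; a masked namespace comes back with an all-zero NGUID.
    ns_is_visible() {
        nvme list-ns /dev/nvme0 | grep "$1"        # $1 is the NSID, e.g. 0x1
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen around several of these calls (the es=0 / es=1 bookkeeping in autotest_common.sh) simply asserts a nonzero exit status, so the masked case — an all-zero NGUID — is the expected, passing outcome.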
00:16:45.032 [2024-07-24 09:02:23.008471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.032 [2024-07-24 09:02:23.103797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:45.291 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:45.291 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:16:45.291 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:45.856 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:45.856 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 09320381-bd07-4e18-8eb4-1c74c31e6d29 00:16:45.856 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:45.856 09:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09320381BD074E188EB41C74C31E6D29 -i 00:16:46.114 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f7f26b47-cdf4-4889-bde6-46fe9cd366ba 00:16:46.114 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:16:46.114 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F7F26B47CDF44889BDE646FE9CD366BA -i 00:16:46.372 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:46.630 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:46.887 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:46.887 09:02:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:47.452 nvme0n1 00:16:47.452 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:47.452 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:47.710 nvme1n2 00:16:47.710 09:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:47.710 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:47.710 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:47.710 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:47.710 09:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:47.968 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:47.968 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:47.968 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:47.968 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:48.226 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 09320381-bd07-4e18-8eb4-1c74c31e6d29 == \0\9\3\2\0\3\8\1\-\b\d\0\7\-\4\e\1\8\-\8\e\b\4\-\1\c\7\4\c\3\1\e\6\d\2\9 ]] 00:16:48.226 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:48.226 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:48.226 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f7f26b47-cdf4-4889-bde6-46fe9cd366ba == \f\7\f\2\6\b\4\7\-\c\d\f\4\-\4\8\8\9\-\b\d\e\6\-\4\6\f\e\9\c\d\3\6\6\b\a ]] 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3759776 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3759776 ']' 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3759776 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3759776 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3759776' 00:16:48.485 killing process with pid 3759776 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3759776 00:16:48.485 09:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3759776 00:16:49.053 09:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:49.311 rmmod nvme_tcp 00:16:49.311 rmmod nvme_fabrics 00:16:49.311 rmmod nvme_keyring 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3758272 ']' 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3758272 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3758272 ']' 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3758272 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3758272 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3758272' 00:16:49.311 killing process with pid 3758272 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3758272 00:16:49.311 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3758272 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # 
remove_spdk_ns 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:49.570 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.109 00:16:52.109 real 0m20.778s 00:16:52.109 user 0m27.065s 00:16:52.109 sys 0m4.145s 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:52.109 ************************************ 00:16:52.109 END TEST nvmf_ns_masking 00:16:52.109 ************************************ 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:52.109 ************************************ 00:16:52.109 START TEST nvmf_nvme_cli 00:16:52.109 ************************************ 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:52.109 * Looking for test storage... 
00:16:52.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.109 09:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.109 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.110 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:54.012 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:54.012 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:54.012 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:54.012 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:54.013 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:54.013 Found net devices under 0000:09:00.0: cvl_0_0 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:54.013 Found net devices under 0000:09:00.1: cvl_0_1 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:54.013 09:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:54.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:54.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:16:54.013 00:16:54.013 --- 10.0.0.2 ping statistics --- 00:16:54.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.013 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:54.013 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:54.013 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:16:54.013 00:16:54.013 --- 10.0.0.1 ping statistics --- 00:16:54.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:54.013 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3762261 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3762261 00:16:54.013 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3762261 ']' 00:16:54.013 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.013 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.013 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.013 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.013 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.013 [2024-07-24 09:02:32.045802] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
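[Editor's note] Context for the 10.0.0.x pings above: nvmf_tcp_init splits the NIC's two ports between the root namespace (initiator side) and a dedicated network namespace (target side). Condensed from the trace, with this rig's cvl_0_0/cvl_0_1 interface names:

    # Condensed nvmf_tcp_init sequence from the trace (interface names are rig-specific).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # target address, from the host
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # initiator address, from the netns

The nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD prefix visible in the trace), which is why it can listen on 10.0.0.2 while host-side nvme-cli connects from 10.0.0.1.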
00:16:54.013 [2024-07-24 09:02:32.045879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.013 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.013 [2024-07-24 09:02:32.085221] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:54.013 [2024-07-24 09:02:32.117623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:54.271 [2024-07-24 09:02:32.214140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:54.271 [2024-07-24 09:02:32.214203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:54.271 [2024-07-24 09:02:32.214220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:54.271 [2024-07-24 09:02:32.214233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:54.271 [2024-07-24 09:02:32.214245] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:54.271 [2024-07-24 09:02:32.214308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:54.271 [2024-07-24 09:02:32.214363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:54.271 [2024-07-24 09:02:32.214424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:54.271 [2024-07-24 09:02:32.214427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.271 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:54.271 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:16:54.271 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:54.271 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:54.271 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.272 [2024-07-24 09:02:32.375555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.272 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 Malloc0 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
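[Editor's note] Stripped of the rpc_cmd/xtrace plumbing, the target-side setup this test performs amounts to a short RPC sequence. A sketch using rpc.py directly, with arguments exactly as traced here and in the steps that appear just below (rpc_cmd is the suite's wrapper around these calls):

    # Equivalent rpc.py invocations for the nvme_cli target setup (path from this workspace).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8192-byte IO unit
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

After this, the initiator can run nvme discover / nvme connect against 10.0.0.2:4420, which is exactly what the remainder of the trace exercises.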
00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 Malloc1 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 [2024-07-24 09:02:32.456921] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:16:54.531 00:16:54.531 Discovery Log Number of Records 2, Generation counter 2 00:16:54.531 =====Discovery 
Log Entry 0====== 00:16:54.531 trtype: tcp 00:16:54.531 adrfam: ipv4 00:16:54.531 subtype: current discovery subsystem 00:16:54.531 treq: not required 00:16:54.531 portid: 0 00:16:54.531 trsvcid: 4420 00:16:54.531 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:54.531 traddr: 10.0.0.2 00:16:54.531 eflags: explicit discovery connections, duplicate discovery information 00:16:54.531 sectype: none 00:16:54.531 =====Discovery Log Entry 1====== 00:16:54.531 trtype: tcp 00:16:54.531 adrfam: ipv4 00:16:54.531 subtype: nvme subsystem 00:16:54.531 treq: not required 00:16:54.531 portid: 0 00:16:54.531 trsvcid: 4420 00:16:54.531 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:54.531 traddr: 10.0.0.2 00:16:54.531 eflags: none 00:16:54.531 sectype: none 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:54.531 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.468 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:55.468 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:16:55.469 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.469 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:16:55.469 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:16:55.469 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.372 09:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:16:57.372 /dev/nvme0n1 ]] 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.372 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:16:57.630 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:57.630 09:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.890 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.891 rmmod nvme_tcp 00:16:57.891 rmmod nvme_fabrics 00:16:57.891 rmmod nvme_keyring 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3762261 ']' 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3762261 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3762261 ']' 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3762261 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:16:57.891 09:02:35 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3762261 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3762261' 00:16:57.891 killing process with pid 3762261 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3762261 00:16:57.891 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3762261 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:58.152 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.692 00:17:00.692 real 0m8.476s 00:17:00.692 user 0m16.234s 00:17:00.692 sys 0m2.226s 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 ************************************ 00:17:00.692 END TEST nvmf_nvme_cli 00:17:00.692 ************************************ 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 ************************************ 00:17:00.692 START TEST nvmf_vfio_user 00:17:00.692 ************************************ 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:17:00.692 * Looking for test storage... 
00:17:00.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:17:00.692 09:02:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3763121 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3763121' 00:17:00.692 Process pid: 3763121 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3763121 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3763121 ']' 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:00.692 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:00.692 [2024-07-24 09:02:38.407173] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:17:00.692 [2024-07-24 09:02:38.407282] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.692 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.692 [2024-07-24 09:02:38.442992] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:00.692 [2024-07-24 09:02:38.474629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.693 [2024-07-24 09:02:38.569426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:00.693 [2024-07-24 09:02:38.569486] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.693 [2024-07-24 09:02:38.569502] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.693 [2024-07-24 09:02:38.569516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.693 [2024-07-24 09:02:38.569527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.693 [2024-07-24 09:02:38.569609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.693 [2024-07-24 09:02:38.569663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.693 [2024-07-24 09:02:38.569688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.693 [2024-07-24 09:02:38.569692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.693 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:00.693 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:00.693 09:02:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:01.631 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:17:01.925 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:01.925 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:01.925 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:01.925 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:01.925 09:02:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:02.183 Malloc1 00:17:02.183 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:02.441 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:02.699 09:02:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:02.957 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:02.957 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:02.957 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:03.216 Malloc2 00:17:03.216 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:03.474 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:03.732 09:02:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:17:03.992 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:03.992 [2024-07-24 09:02:42.066058] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:17:03.992 [2024-07-24 09:02:42.066122] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3763604 ] 00:17:03.992 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.992 [2024-07-24 09:02:42.082855] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
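(For reference: setup_nvmf_vfio_user above boots nvmf_tgt on cores [0,1,2,3] and builds one vfio-user endpoint per device. A sketch of the calls just driven for device 1, with all paths and NQNs verbatim from this run — device 2 mirrors it with Malloc2 / nqn.2019-07.io.spdk:cnode2 / SPDK2:

  # socket directory for the vfio-user endpoint, then the VFIOUSER transport
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1
  scripts/rpc.py nvmf_create_transport -t VFIOUSER
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  # listener address is the socket directory, service id 0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
  # exercise the controller through the vfio-user socket:
  build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci

The identify debug output that follows walks the full controller bring-up over that socket: map the BARs, read CAP/VS, toggle CC.EN and poll CSTS.RDY, then issue Identify and Get/Set Features on the admin queue.)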
00:17:03.992 [2024-07-24 09:02:42.100407] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:17:03.992 [2024-07-24 09:02:42.105851] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:03.992 [2024-07-24 09:02:42.105882] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7faf5f26c000 00:17:03.992 [2024-07-24 09:02:42.106850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:03.992 [2024-07-24 09:02:42.107846] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.108850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.109861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.110861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.111869] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.112871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.113874] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:04.253 [2024-07-24 09:02:42.114880] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:04.253 [2024-07-24 09:02:42.114899] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7faf5e02e000 00:17:04.253 [2024-07-24 09:02:42.116012] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.253 [2024-07-24 09:02:42.131626] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:17:04.253 [2024-07-24 09:02:42.131663] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:17:04.253 [2024-07-24 09:02:42.134013] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:04.253 [2024-07-24 09:02:42.134071] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:17:04.253 [2024-07-24 09:02:42.134194] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:17:04.254 [2024-07-24 09:02:42.134229] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:17:04.254 [2024-07-24 09:02:42.134240] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 
00:17:04.254 [2024-07-24 09:02:42.135000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:17:04.254 [2024-07-24 09:02:42.135023] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:17:04.254 [2024-07-24 09:02:42.135036] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:17:04.254 [2024-07-24 09:02:42.136004] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:17:04.254 [2024-07-24 09:02:42.136022] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:17:04.254 [2024-07-24 09:02:42.136036] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.137010] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:17:04.254 [2024-07-24 09:02:42.137028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.138013] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:17:04.254 [2024-07-24 09:02:42.138032] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:17:04.254 [2024-07-24 09:02:42.138041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.138052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.138161] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:17:04.254 [2024-07-24 09:02:42.138171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.138185] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:17:04.254 [2024-07-24 09:02:42.139023] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:17:04.254 [2024-07-24 09:02:42.140029] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:17:04.254 [2024-07-24 09:02:42.141032] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:04.254 [2024-07-24 09:02:42.142030] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:04.254 [2024-07-24 09:02:42.142192] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for 
CSTS.RDY = 1 (timeout 15000 ms) 00:17:04.254 [2024-07-24 09:02:42.143048] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:17:04.254 [2024-07-24 09:02:42.143064] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:04.254 [2024-07-24 09:02:42.143073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143116] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:17:04.254 [2024-07-24 09:02:42.143132] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143163] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.254 [2024-07-24 09:02:42.143173] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.254 [2024-07-24 09:02:42.143180] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.254 [2024-07-24 09:02:42.143203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143281] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:17:04.254 [2024-07-24 09:02:42.143290] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:17:04.254 [2024-07-24 09:02:42.143298] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:17:04.254 [2024-07-24 09:02:42.143306] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:04.254 [2024-07-24 09:02:42.143314] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:17:04.254 [2024-07-24 09:02:42.143322] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:17:04.254 [2024-07-24 09:02:42.143330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143364] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.254 [2024-07-24 09:02:42.143439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.254 [2024-07-24 09:02:42.143467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.254 [2024-07-24 09:02:42.143478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:04.254 [2024-07-24 09:02:42.143487] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143537] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:17:04.254 [2024-07-24 09:02:42.143545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143572] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143585] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143660] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143676] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143689] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:17:04.254 [2024-07-24 09:02:42.143697] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:04.254 [2024-07-24 09:02:42.143703] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.254 [2024-07-24 09:02:42.143712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) 
qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143748] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:17:04.254 [2024-07-24 09:02:42.143768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143783] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143797] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.254 [2024-07-24 09:02:42.143806] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.254 [2024-07-24 09:02:42.143812] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.254 [2024-07-24 09:02:42.143821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143870] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143884] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:04.254 [2024-07-24 09:02:42.143896] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:04.254 [2024-07-24 09:02:42.143904] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.254 [2024-07-24 09:02:42.143909] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.254 [2024-07-24 09:02:42.143919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.254 [2024-07-24 09:02:42.143932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:04.254 [2024-07-24 09:02:42.143947] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.143958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.143971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.143985] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.143993] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.144002] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.144010] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:17:04.255 [2024-07-24 09:02:42.144018] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:17:04.255 [2024-07-24 09:02:42.144026] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:17:04.255 [2024-07-24 09:02:42.144055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144149] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144218] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:04.255 [2024-07-24 09:02:42.144228] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:04.255 [2024-07-24 09:02:42.144234] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:04.255 [2024-07-24 09:02:42.144241] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:04.255 [2024-07-24 09:02:42.144247] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:04.255 [2024-07-24 09:02:42.144257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:04.255 [2024-07-24 09:02:42.144268] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:04.255 [2024-07-24 09:02:42.144277] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:04.255 [2024-07-24 09:02:42.144283] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.255 [2024-07-24 09:02:42.144292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144304] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:04.255 [2024-07-24 09:02:42.144312] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:04.255 [2024-07-24 09:02:42.144318] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.255 [2024-07-24 09:02:42.144327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144340] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:04.255 [2024-07-24 09:02:42.144348] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:04.255 [2024-07-24 09:02:42.144354] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:04.255 [2024-07-24 09:02:42.144363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:04.255 [2024-07-24 09:02:42.144375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:04.255 [2024-07-24 09:02:42.144438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:04.255 ===================================================== 00:17:04.255 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:04.255 ===================================================== 00:17:04.255 Controller Capabilities/Features 00:17:04.255 ================================ 00:17:04.255 Vendor ID: 4e58 00:17:04.255 Subsystem Vendor ID: 4e58 00:17:04.255 Serial Number: SPDK1 00:17:04.255 Model Number: SPDK bdev Controller 00:17:04.255 Firmware Version: 24.09 00:17:04.255 Recommended Arb Burst: 6 00:17:04.255 IEEE OUI Identifier: 8d 6b 50 00:17:04.255 Multi-path I/O 00:17:04.255 May have multiple subsystem ports: Yes 00:17:04.255 May have multiple controllers: Yes 00:17:04.255 Associated with SR-IOV VF: No 00:17:04.255 Max Data Transfer Size: 131072 00:17:04.255 Max Number of Namespaces: 32 00:17:04.255 Max Number of I/O Queues: 127 00:17:04.255 NVMe Specification Version (VS): 1.3 00:17:04.255 NVMe Specification Version (Identify): 1.3 00:17:04.255 Maximum Queue Entries: 256 00:17:04.255 Contiguous Queues Required: Yes 00:17:04.255 Arbitration Mechanisms Supported 00:17:04.255 Weighted Round Robin: Not Supported 00:17:04.255 Vendor Specific: Not Supported 00:17:04.255 Reset Timeout: 15000 ms 00:17:04.255 Doorbell Stride: 4 bytes 00:17:04.255 NVM Subsystem Reset: Not Supported 00:17:04.255 Command Sets Supported 00:17:04.255 NVM Command Set: Supported 00:17:04.255 Boot Partition: Not Supported 00:17:04.255 Memory Page Size Minimum: 4096 bytes 00:17:04.255 Memory Page Size Maximum: 4096 bytes 00:17:04.255 Persistent Memory Region: Not Supported 00:17:04.255 Optional Asynchronous Events Supported 00:17:04.255 Namespace Attribute Notices: 
Supported 00:17:04.255 Firmware Activation Notices: Not Supported 00:17:04.255 ANA Change Notices: Not Supported 00:17:04.255 PLE Aggregate Log Change Notices: Not Supported 00:17:04.255 LBA Status Info Alert Notices: Not Supported 00:17:04.255 EGE Aggregate Log Change Notices: Not Supported 00:17:04.255 Normal NVM Subsystem Shutdown event: Not Supported 00:17:04.255 Zone Descriptor Change Notices: Not Supported 00:17:04.255 Discovery Log Change Notices: Not Supported 00:17:04.255 Controller Attributes 00:17:04.255 128-bit Host Identifier: Supported 00:17:04.255 Non-Operational Permissive Mode: Not Supported 00:17:04.255 NVM Sets: Not Supported 00:17:04.255 Read Recovery Levels: Not Supported 00:17:04.255 Endurance Groups: Not Supported 00:17:04.255 Predictable Latency Mode: Not Supported 00:17:04.255 Traffic Based Keep ALive: Not Supported 00:17:04.255 Namespace Granularity: Not Supported 00:17:04.255 SQ Associations: Not Supported 00:17:04.255 UUID List: Not Supported 00:17:04.255 Multi-Domain Subsystem: Not Supported 00:17:04.255 Fixed Capacity Management: Not Supported 00:17:04.255 Variable Capacity Management: Not Supported 00:17:04.255 Delete Endurance Group: Not Supported 00:17:04.255 Delete NVM Set: Not Supported 00:17:04.255 Extended LBA Formats Supported: Not Supported 00:17:04.255 Flexible Data Placement Supported: Not Supported 00:17:04.255 00:17:04.255 Controller Memory Buffer Support 00:17:04.255 ================================ 00:17:04.255 Supported: No 00:17:04.255 00:17:04.255 Persistent Memory Region Support 00:17:04.255 ================================ 00:17:04.255 Supported: No 00:17:04.255 00:17:04.255 Admin Command Set Attributes 00:17:04.255 ============================ 00:17:04.255 Security Send/Receive: Not Supported 00:17:04.255 Format NVM: Not Supported 00:17:04.255 Firmware Activate/Download: Not Supported 00:17:04.255 Namespace Management: Not Supported 00:17:04.255 Device Self-Test: Not Supported 00:17:04.255 Directives: Not Supported 00:17:04.255 NVMe-MI: Not Supported 00:17:04.255 Virtualization Management: Not Supported 00:17:04.255 Doorbell Buffer Config: Not Supported 00:17:04.255 Get LBA Status Capability: Not Supported 00:17:04.255 Command & Feature Lockdown Capability: Not Supported 00:17:04.255 Abort Command Limit: 4 00:17:04.255 Async Event Request Limit: 4 00:17:04.255 Number of Firmware Slots: N/A 00:17:04.255 Firmware Slot 1 Read-Only: N/A 00:17:04.255 Firmware Activation Without Reset: N/A 00:17:04.255 Multiple Update Detection Support: N/A 00:17:04.255 Firmware Update Granularity: No Information Provided 00:17:04.255 Per-Namespace SMART Log: No 00:17:04.255 Asymmetric Namespace Access Log Page: Not Supported 00:17:04.255 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:17:04.255 Command Effects Log Page: Supported 00:17:04.255 Get Log Page Extended Data: Supported 00:17:04.255 Telemetry Log Pages: Not Supported 00:17:04.255 Persistent Event Log Pages: Not Supported 00:17:04.256 Supported Log Pages Log Page: May Support 00:17:04.256 Commands Supported & Effects Log Page: Not Supported 00:17:04.256 Feature Identifiers & Effects Log Page:May Support 00:17:04.256 NVMe-MI Commands & Effects Log Page: May Support 00:17:04.256 Data Area 4 for Telemetry Log: Not Supported 00:17:04.256 Error Log Page Entries Supported: 128 00:17:04.256 Keep Alive: Supported 00:17:04.256 Keep Alive Granularity: 10000 ms 00:17:04.256 00:17:04.256 NVM Command Set Attributes 00:17:04.256 ========================== 00:17:04.256 Submission Queue Entry Size 00:17:04.256 Max: 64 
00:17:04.256 Min: 64 00:17:04.256 Completion Queue Entry Size 00:17:04.256 Max: 16 00:17:04.256 Min: 16 00:17:04.256 Number of Namespaces: 32 00:17:04.256 Compare Command: Supported 00:17:04.256 Write Uncorrectable Command: Not Supported 00:17:04.256 Dataset Management Command: Supported 00:17:04.256 Write Zeroes Command: Supported 00:17:04.256 Set Features Save Field: Not Supported 00:17:04.256 Reservations: Not Supported 00:17:04.256 Timestamp: Not Supported 00:17:04.256 Copy: Supported 00:17:04.256 Volatile Write Cache: Present 00:17:04.256 Atomic Write Unit (Normal): 1 00:17:04.256 Atomic Write Unit (PFail): 1 00:17:04.256 Atomic Compare & Write Unit: 1 00:17:04.256 Fused Compare & Write: Supported 00:17:04.256 Scatter-Gather List 00:17:04.256 SGL Command Set: Supported (Dword aligned) 00:17:04.256 SGL Keyed: Not Supported 00:17:04.256 SGL Bit Bucket Descriptor: Not Supported 00:17:04.256 SGL Metadata Pointer: Not Supported 00:17:04.256 Oversized SGL: Not Supported 00:17:04.256 SGL Metadata Address: Not Supported 00:17:04.256 SGL Offset: Not Supported 00:17:04.256 Transport SGL Data Block: Not Supported 00:17:04.256 Replay Protected Memory Block: Not Supported 00:17:04.256 00:17:04.256 Firmware Slot Information 00:17:04.256 ========================= 00:17:04.256 Active slot: 1 00:17:04.256 Slot 1 Firmware Revision: 24.09 00:17:04.256 00:17:04.256 00:17:04.256 Commands Supported and Effects 00:17:04.256 ============================== 00:17:04.256 Admin Commands 00:17:04.256 -------------- 00:17:04.256 Get Log Page (02h): Supported 00:17:04.256 Identify (06h): Supported 00:17:04.256 Abort (08h): Supported 00:17:04.256 Set Features (09h): Supported 00:17:04.256 Get Features (0Ah): Supported 00:17:04.256 Asynchronous Event Request (0Ch): Supported 00:17:04.256 Keep Alive (18h): Supported 00:17:04.256 I/O Commands 00:17:04.256 ------------ 00:17:04.256 Flush (00h): Supported LBA-Change 00:17:04.256 Write (01h): Supported LBA-Change 00:17:04.256 Read (02h): Supported 00:17:04.256 Compare (05h): Supported 00:17:04.256 Write Zeroes (08h): Supported LBA-Change 00:17:04.256 Dataset Management (09h): Supported LBA-Change 00:17:04.256 Copy (19h): Supported LBA-Change 00:17:04.256 00:17:04.256 Error Log 00:17:04.256 ========= 00:17:04.256 00:17:04.256 Arbitration 00:17:04.256 =========== 00:17:04.256 Arbitration Burst: 1 00:17:04.256 00:17:04.256 Power Management 00:17:04.256 ================ 00:17:04.256 Number of Power States: 1 00:17:04.256 Current Power State: Power State #0 00:17:04.256 Power State #0: 00:17:04.256 Max Power: 0.00 W 00:17:04.256 Non-Operational State: Operational 00:17:04.256 Entry Latency: Not Reported 00:17:04.256 Exit Latency: Not Reported 00:17:04.256 Relative Read Throughput: 0 00:17:04.256 Relative Read Latency: 0 00:17:04.256 Relative Write Throughput: 0 00:17:04.256 Relative Write Latency: 0 00:17:04.256 Idle Power: Not Reported 00:17:04.256 Active Power: Not Reported 00:17:04.256 Non-Operational Permissive Mode: Not Supported 00:17:04.256 00:17:04.256 Health Information 00:17:04.256 ================== 00:17:04.256 Critical Warnings: 00:17:04.256 Available Spare Space: OK 00:17:04.256 Temperature: OK 00:17:04.256 Device Reliability: OK 00:17:04.256 Read Only: No 00:17:04.256 Volatile Memory Backup: OK 00:17:04.256 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:04.256 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:04.256 Available Spare: 0% 00:17:04.256 [2024-07-24 09:02:42.144563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET
FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:04.256 [2024-07-24 09:02:42.144578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:04.256 [2024-07-24 09:02:42.144623] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:17:04.256 [2024-07-24 09:02:42.144641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.256 [2024-07-24 09:02:42.144655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.256 [2024-07-24 09:02:42.144665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.256 [2024-07-24 09:02:42.144675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:04.256 [2024-07-24 09:02:42.147113] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:17:04.256 [2024-07-24 09:02:42.147136] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:17:04.256 [2024-07-24 09:02:42.148068] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:04.256 [2024-07-24 09:02:42.148163] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:17:04.256 [2024-07-24 09:02:42.148178] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:17:04.256 [2024-07-24 09:02:42.149076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:17:04.256 [2024-07-24 09:02:42.149099] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:17:04.256 [2024-07-24 09:02:42.149178] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:17:04.256 [2024-07-24 09:02:42.152114] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:04.256 Available Spare Threshold: 0% 00:17:04.256 Life Percentage Used: 0% 00:17:04.256 Data Units Read: 0 00:17:04.256 Data Units Written: 0 00:17:04.256 Host Read Commands: 0 00:17:04.256 Host Write Commands: 0 00:17:04.256 Controller Busy Time: 0 minutes 00:17:04.256 Power Cycles: 0 00:17:04.256 Power On Hours: 0 hours 00:17:04.256 Unsafe Shutdowns: 0 00:17:04.256 Unrecoverable Media Errors: 0 00:17:04.256 Lifetime Error Log Entries: 0 00:17:04.256 Warning Temperature Time: 0 minutes 00:17:04.256 Critical Temperature Time: 0 minutes 00:17:04.256 00:17:04.256 Number of Queues 00:17:04.256 ================ 00:17:04.256 Number of I/O Submission Queues: 127 00:17:04.256 Number of I/O Completion Queues: 127 00:17:04.256 00:17:04.256 Active Namespaces 00:17:04.256 ================= 00:17:04.256 Namespace ID:1 00:17:04.256 Error Recovery Timeout: Unlimited 00:17:04.256 Command Set Identifier: NVM (00h) 00:17:04.256 Deallocate: Supported 00:17:04.256 Deallocated/Unwritten Error: Not
Supported 00:17:04.256 Deallocated Read Value: Unknown 00:17:04.256 Deallocate in Write Zeroes: Not Supported 00:17:04.256 Deallocated Guard Field: 0xFFFF 00:17:04.256 Flush: Supported 00:17:04.256 Reservation: Supported 00:17:04.256 Namespace Sharing Capabilities: Multiple Controllers 00:17:04.256 Size (in LBAs): 131072 (0GiB) 00:17:04.256 Capacity (in LBAs): 131072 (0GiB) 00:17:04.256 Utilization (in LBAs): 131072 (0GiB) 00:17:04.256 NGUID: 29B1420C7D88490586F17B1FD1502216 00:17:04.256 UUID: 29b1420c-7d88-4905-86f1-7b1fd1502216 00:17:04.256 Thin Provisioning: Not Supported 00:17:04.256 Per-NS Atomic Units: Yes 00:17:04.256 Atomic Boundary Size (Normal): 0 00:17:04.256 Atomic Boundary Size (PFail): 0 00:17:04.256 Atomic Boundary Offset: 0 00:17:04.256 Maximum Single Source Range Length: 65535 00:17:04.256 Maximum Copy Length: 65535 00:17:04.256 Maximum Source Range Count: 1 00:17:04.256 NGUID/EUI64 Never Reused: No 00:17:04.256 Namespace Write Protected: No 00:17:04.256 Number of LBA Formats: 1 00:17:04.256 Current LBA Format: LBA Format #00 00:17:04.256 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:04.256 00:17:04.256 09:02:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:04.256 EAL: No free 2048 kB hugepages reported on node 1 00:17:04.516 [2024-07-24 09:02:42.383915] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:09.799 Initializing NVMe Controllers 00:17:09.799 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:09.799 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:09.799 Initialization complete. Launching workers. 00:17:09.799 ======================================================== 00:17:09.799 Latency(us) 00:17:09.799 Device Information : IOPS MiB/s Average min max 00:17:09.799 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35511.75 138.72 3603.82 1152.25 7376.95 00:17:09.799 ======================================================== 00:17:09.799 Total : 35511.75 138.72 3603.82 1152.25 7376.95 00:17:09.799 00:17:09.799 [2024-07-24 09:02:47.411044] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:09.799 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:09.799 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.799 [2024-07-24 09:02:47.655203] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:15.072 Initializing NVMe Controllers 00:17:15.072 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:15.072 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:17:15.072 Initialization complete. Launching workers. 
00:17:15.072 ======================================================== 00:17:15.072 Latency(us) 00:17:15.072 Device Information : IOPS MiB/s Average min max 00:17:15.072 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.43 62.74 7975.12 5966.94 9983.79 00:17:15.072 ======================================================== 00:17:15.072 Total : 16060.43 62.74 7975.12 5966.94 9983.79 00:17:15.072 00:17:15.072 [2024-07-24 09:02:52.694853] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:15.072 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:15.072 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.072 [2024-07-24 09:02:52.891870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:20.346 [2024-07-24 09:02:57.971548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:20.346 Initializing NVMe Controllers 00:17:20.346 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.346 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:17:20.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:17:20.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:17:20.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:17:20.346 Initialization complete. Launching workers. 00:17:20.346 Starting thread on core 2 00:17:20.346 Starting thread on core 3 00:17:20.346 Starting thread on core 1 00:17:20.347 09:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:17:20.347 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.347 [2024-07-24 09:02:58.279578] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.632 [2024-07-24 09:03:01.339726] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.632 Initializing NVMe Controllers 00:17:23.632 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.632 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.632 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:17:23.632 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:17:23.632 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:17:23.632 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:17:23.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:23.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:23.632 Initialization complete. Launching workers. 
00:17:23.632 Starting thread on core 1 with urgent priority queue 00:17:23.632 Starting thread on core 2 with urgent priority queue 00:17:23.632 Starting thread on core 3 with urgent priority queue 00:17:23.632 Starting thread on core 0 with urgent priority queue 00:17:23.632 SPDK bdev Controller (SPDK1 ) core 0: 5041.33 IO/s 19.84 secs/100000 ios 00:17:23.632 SPDK bdev Controller (SPDK1 ) core 1: 5216.33 IO/s 19.17 secs/100000 ios 00:17:23.632 SPDK bdev Controller (SPDK1 ) core 2: 5274.00 IO/s 18.96 secs/100000 ios 00:17:23.632 SPDK bdev Controller (SPDK1 ) core 3: 5626.00 IO/s 17.77 secs/100000 ios 00:17:23.632 ======================================================== 00:17:23.632 00:17:23.632 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.632 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.632 [2024-07-24 09:03:01.639692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:23.632 Initializing NVMe Controllers 00:17:23.632 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.632 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:23.632 Namespace ID: 1 size: 0GB 00:17:23.632 Initialization complete. 00:17:23.632 INFO: using host memory buffer for IO 00:17:23.632 Hello world! 00:17:23.633 [2024-07-24 09:03:01.675284] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:23.633 09:03:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:17:23.890 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.890 [2024-07-24 09:03:01.969586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.268 Initializing NVMe Controllers 00:17:25.268 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.268 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.268 Initialization complete. Launching workers. 
00:17:25.268 submit (in ns) avg, min, max = 7247.1, 3541.1, 4023906.7 00:17:25.268 complete (in ns) avg, min, max = 25266.8, 2072.2, 4016402.2 00:17:25.268 00:17:25.268 Submit histogram 00:17:25.268 ================ 00:17:25.268 Range in us Cumulative Count 00:17:25.268 3.532 - 3.556: 0.5158% ( 69) 00:17:25.268 3.556 - 3.579: 3.1397% ( 351) 00:17:25.268 3.579 - 3.603: 7.1840% ( 541) 00:17:25.268 3.603 - 3.627: 15.9004% ( 1166) 00:17:25.268 3.627 - 3.650: 24.0936% ( 1096) 00:17:25.268 3.650 - 3.674: 33.9688% ( 1321) 00:17:25.268 3.674 - 3.698: 40.8388% ( 919) 00:17:25.268 3.698 - 3.721: 48.8749% ( 1075) 00:17:25.268 3.721 - 3.745: 54.9376% ( 811) 00:17:25.268 3.745 - 3.769: 61.3217% ( 854) 00:17:25.268 3.769 - 3.793: 65.6724% ( 582) 00:17:25.268 3.793 - 3.816: 69.1784% ( 469) 00:17:25.268 3.816 - 3.840: 72.3406% ( 423) 00:17:25.268 3.840 - 3.864: 75.5850% ( 434) 00:17:25.268 3.864 - 3.887: 78.7097% ( 418) 00:17:25.268 3.887 - 3.911: 82.1784% ( 464) 00:17:25.268 3.911 - 3.935: 84.7948% ( 350) 00:17:25.268 3.935 - 3.959: 87.0076% ( 296) 00:17:25.268 3.959 - 3.982: 88.7493% ( 233) 00:17:25.268 3.982 - 4.006: 90.5659% ( 243) 00:17:25.268 4.006 - 4.030: 91.9788% ( 189) 00:17:25.268 4.030 - 4.053: 93.2496% ( 170) 00:17:25.268 4.053 - 4.077: 94.3859% ( 152) 00:17:25.268 4.077 - 4.101: 95.1409% ( 101) 00:17:25.268 4.101 - 4.124: 95.7315% ( 79) 00:17:25.268 4.124 - 4.148: 96.1277% ( 53) 00:17:25.268 4.148 - 4.172: 96.3071% ( 24) 00:17:25.268 4.172 - 4.196: 96.4491% ( 19) 00:17:25.268 4.196 - 4.219: 96.5538% ( 14) 00:17:25.268 4.219 - 4.243: 96.6734% ( 16) 00:17:25.268 4.243 - 4.267: 96.7407% ( 9) 00:17:25.268 4.267 - 4.290: 96.8154% ( 10) 00:17:25.268 4.290 - 4.314: 96.9126% ( 13) 00:17:25.268 4.314 - 4.338: 96.9724% ( 8) 00:17:25.268 4.338 - 4.361: 96.9948% ( 3) 00:17:25.268 4.361 - 4.385: 97.0247% ( 4) 00:17:25.268 4.385 - 4.409: 97.0397% ( 2) 00:17:25.268 4.409 - 4.433: 97.0696% ( 4) 00:17:25.268 4.433 - 4.456: 97.0845% ( 2) 00:17:25.268 4.456 - 4.480: 97.1070% ( 3) 00:17:25.268 4.480 - 4.504: 97.1219% ( 2) 00:17:25.268 4.504 - 4.527: 97.1294% ( 1) 00:17:25.268 4.527 - 4.551: 97.1593% ( 4) 00:17:25.268 4.551 - 4.575: 97.1892% ( 4) 00:17:25.268 4.575 - 4.599: 97.2042% ( 2) 00:17:25.268 4.599 - 4.622: 97.2565% ( 7) 00:17:25.268 4.622 - 4.646: 97.2939% ( 5) 00:17:25.268 4.646 - 4.670: 97.3387% ( 6) 00:17:25.268 4.670 - 4.693: 97.3611% ( 3) 00:17:25.268 4.693 - 4.717: 97.4135% ( 7) 00:17:25.268 4.717 - 4.741: 97.4359% ( 3) 00:17:25.268 4.741 - 4.764: 97.4808% ( 6) 00:17:25.268 4.764 - 4.788: 97.5256% ( 6) 00:17:25.268 4.788 - 4.812: 97.5929% ( 9) 00:17:25.268 4.812 - 4.836: 97.6676% ( 10) 00:17:25.268 4.836 - 4.859: 97.6826% ( 2) 00:17:25.268 4.859 - 4.883: 97.7499% ( 9) 00:17:25.268 4.883 - 4.907: 97.7648% ( 2) 00:17:25.268 4.907 - 4.930: 97.7723% ( 1) 00:17:25.268 4.930 - 4.954: 97.7947% ( 3) 00:17:25.268 4.954 - 4.978: 97.8321% ( 5) 00:17:25.268 4.978 - 5.001: 97.8471% ( 2) 00:17:25.268 5.001 - 5.025: 97.8545% ( 1) 00:17:25.268 5.025 - 5.049: 97.8770% ( 3) 00:17:25.268 5.049 - 5.073: 97.8844% ( 1) 00:17:25.268 5.073 - 5.096: 97.9069% ( 3) 00:17:25.268 5.096 - 5.120: 97.9143% ( 1) 00:17:25.268 5.120 - 5.144: 97.9218% ( 1) 00:17:25.268 5.144 - 5.167: 97.9293% ( 1) 00:17:25.268 5.262 - 5.286: 97.9368% ( 1) 00:17:25.268 5.310 - 5.333: 97.9442% ( 1) 00:17:25.268 5.381 - 5.404: 97.9592% ( 2) 00:17:25.268 5.452 - 5.476: 97.9667% ( 1) 00:17:25.268 5.665 - 5.689: 97.9741% ( 1) 00:17:25.268 5.713 - 5.736: 97.9891% ( 2) 00:17:25.268 5.736 - 5.760: 98.0040% ( 2) 00:17:25.268 5.831 - 5.855: 98.0115% ( 1) 
00:17:25.268 5.950 - 5.973: 98.0190% ( 1) 00:17:25.268 6.068 - 6.116: 98.0265% ( 1) 00:17:25.268 6.116 - 6.163: 98.0339% ( 1) 00:17:25.268 6.210 - 6.258: 98.0414% ( 1) 00:17:25.268 6.258 - 6.305: 98.0489% ( 1) 00:17:25.268 6.400 - 6.447: 98.0564% ( 1) 00:17:25.268 6.447 - 6.495: 98.0638% ( 1) 00:17:25.268 6.495 - 6.542: 98.0713% ( 1) 00:17:25.268 6.542 - 6.590: 98.0788% ( 1) 00:17:25.268 6.637 - 6.684: 98.0863% ( 1) 00:17:25.268 6.684 - 6.732: 98.0937% ( 1) 00:17:25.268 6.827 - 6.874: 98.1012% ( 1) 00:17:25.268 6.874 - 6.921: 98.1162% ( 2) 00:17:25.268 6.969 - 7.016: 98.1236% ( 1) 00:17:25.268 7.016 - 7.064: 98.1311% ( 1) 00:17:25.268 7.064 - 7.111: 98.1535% ( 3) 00:17:25.268 7.159 - 7.206: 98.1834% ( 4) 00:17:25.268 7.301 - 7.348: 98.1984% ( 2) 00:17:25.268 7.348 - 7.396: 98.2134% ( 2) 00:17:25.268 7.443 - 7.490: 98.2208% ( 1) 00:17:25.268 7.490 - 7.538: 98.2507% ( 4) 00:17:25.268 7.538 - 7.585: 98.2657% ( 2) 00:17:25.268 7.585 - 7.633: 98.2806% ( 2) 00:17:25.268 7.633 - 7.680: 98.2956% ( 2) 00:17:25.268 7.680 - 7.727: 98.3105% ( 2) 00:17:25.268 7.727 - 7.775: 98.3180% ( 1) 00:17:25.268 7.822 - 7.870: 98.3255% ( 1) 00:17:25.268 7.870 - 7.917: 98.3330% ( 1) 00:17:25.268 7.917 - 7.964: 98.3629% ( 4) 00:17:25.268 7.964 - 8.012: 98.3703% ( 1) 00:17:25.268 8.059 - 8.107: 98.3853% ( 2) 00:17:25.268 8.107 - 8.154: 98.3928% ( 1) 00:17:25.268 8.154 - 8.201: 98.4077% ( 2) 00:17:25.268 8.201 - 8.249: 98.4152% ( 1) 00:17:25.268 8.249 - 8.296: 98.4227% ( 1) 00:17:25.268 8.296 - 8.344: 98.4376% ( 2) 00:17:25.268 8.344 - 8.391: 98.4451% ( 1) 00:17:25.268 8.439 - 8.486: 98.4750% ( 4) 00:17:25.268 8.533 - 8.581: 98.4825% ( 1) 00:17:25.268 8.581 - 8.628: 98.4899% ( 1) 00:17:25.268 8.676 - 8.723: 98.4974% ( 1) 00:17:25.268 8.723 - 8.770: 98.5049% ( 1) 00:17:25.268 8.818 - 8.865: 98.5124% ( 1) 00:17:25.268 8.865 - 8.913: 98.5198% ( 1) 00:17:25.268 8.960 - 9.007: 98.5423% ( 3) 00:17:25.268 9.007 - 9.055: 98.5572% ( 2) 00:17:25.268 9.102 - 9.150: 98.5647% ( 1) 00:17:25.268 9.197 - 9.244: 98.5722% ( 1) 00:17:25.268 9.387 - 9.434: 98.5797% ( 1) 00:17:25.268 9.481 - 9.529: 98.5871% ( 1) 00:17:25.268 9.576 - 9.624: 98.5946% ( 1) 00:17:25.268 9.624 - 9.671: 98.6021% ( 1) 00:17:25.268 9.671 - 9.719: 98.6096% ( 1) 00:17:25.268 9.719 - 9.766: 98.6245% ( 2) 00:17:25.268 9.861 - 9.908: 98.6395% ( 2) 00:17:25.268 9.956 - 10.003: 98.6469% ( 1) 00:17:25.268 10.003 - 10.050: 98.6694% ( 3) 00:17:25.268 10.050 - 10.098: 98.6768% ( 1) 00:17:25.268 10.145 - 10.193: 98.6843% ( 1) 00:17:25.268 10.287 - 10.335: 98.6918% ( 1) 00:17:25.268 10.524 - 10.572: 98.7067% ( 2) 00:17:25.268 10.619 - 10.667: 98.7366% ( 4) 00:17:25.268 10.714 - 10.761: 98.7441% ( 1) 00:17:25.268 10.856 - 10.904: 98.7516% ( 1) 00:17:25.268 10.904 - 10.951: 98.7591% ( 1) 00:17:25.268 10.951 - 10.999: 98.7665% ( 1) 00:17:25.268 11.141 - 11.188: 98.7740% ( 1) 00:17:25.268 11.330 - 11.378: 98.7815% ( 1) 00:17:25.268 11.520 - 11.567: 98.7890% ( 1) 00:17:25.269 11.567 - 11.615: 98.7964% ( 1) 00:17:25.269 11.662 - 11.710: 98.8039% ( 1) 00:17:25.269 11.710 - 11.757: 98.8189% ( 2) 00:17:25.269 11.804 - 11.852: 98.8263% ( 1) 00:17:25.269 11.852 - 11.899: 98.8338% ( 1) 00:17:25.269 11.947 - 11.994: 98.8413% ( 1) 00:17:25.269 12.041 - 12.089: 98.8488% ( 1) 00:17:25.269 12.136 - 12.231: 98.8712% ( 3) 00:17:25.269 12.231 - 12.326: 98.8861% ( 2) 00:17:25.269 12.326 - 12.421: 98.9011% ( 2) 00:17:25.269 12.421 - 12.516: 98.9086% ( 1) 00:17:25.269 12.516 - 12.610: 98.9160% ( 1) 00:17:25.269 12.895 - 12.990: 98.9310% ( 2) 00:17:25.269 12.990 - 13.084: 98.9385% ( 1) 00:17:25.269 
13.084 - 13.179: 98.9609% ( 3) 00:17:25.269 13.179 - 13.274: 98.9684% ( 1) 00:17:25.269 13.369 - 13.464: 98.9759% ( 1) 00:17:25.269 13.748 - 13.843: 98.9908% ( 2) 00:17:25.269 13.843 - 13.938: 98.9983% ( 1) 00:17:25.269 13.938 - 14.033: 99.0132% ( 2) 00:17:25.269 14.033 - 14.127: 99.0282% ( 2) 00:17:25.269 14.127 - 14.222: 99.0506% ( 3) 00:17:25.269 14.222 - 14.317: 99.0581% ( 1) 00:17:25.269 14.317 - 14.412: 99.0656% ( 1) 00:17:25.269 14.412 - 14.507: 99.0955% ( 4) 00:17:25.269 14.601 - 14.696: 99.1029% ( 1) 00:17:25.269 14.696 - 14.791: 99.1254% ( 3) 00:17:25.269 14.886 - 14.981: 99.1328% ( 1) 00:17:25.269 16.877 - 16.972: 99.1403% ( 1) 00:17:25.269 17.067 - 17.161: 99.1553% ( 2) 00:17:25.269 17.256 - 17.351: 99.1627% ( 1) 00:17:25.269 17.351 - 17.446: 99.1777% ( 2) 00:17:25.269 17.446 - 17.541: 99.1926% ( 2) 00:17:25.269 17.541 - 17.636: 99.2375% ( 6) 00:17:25.269 17.636 - 17.730: 99.2749% ( 5) 00:17:25.269 17.730 - 17.825: 99.3123% ( 5) 00:17:25.269 17.825 - 17.920: 99.3422% ( 4) 00:17:25.269 17.920 - 18.015: 99.3945% ( 7) 00:17:25.269 18.015 - 18.110: 99.4468% ( 7) 00:17:25.269 18.110 - 18.204: 99.4767% ( 4) 00:17:25.269 18.204 - 18.299: 99.5141% ( 5) 00:17:25.269 18.299 - 18.394: 99.5365% ( 3) 00:17:25.269 18.394 - 18.489: 99.6187% ( 11) 00:17:25.269 18.489 - 18.584: 99.6561% ( 5) 00:17:25.269 18.584 - 18.679: 99.7010% ( 6) 00:17:25.269 18.679 - 18.773: 99.7234% ( 3) 00:17:25.269 18.773 - 18.868: 99.7384% ( 2) 00:17:25.269 18.963 - 19.058: 99.7533% ( 2) 00:17:25.269 19.058 - 19.153: 99.7757% ( 3) 00:17:25.269 19.153 - 19.247: 99.7832% ( 1) 00:17:25.269 19.342 - 19.437: 99.7982% ( 2) 00:17:25.269 19.627 - 19.721: 99.8206% ( 3) 00:17:25.269 19.721 - 19.816: 99.8281% ( 1) 00:17:25.269 20.101 - 20.196: 99.8355% ( 1) 00:17:25.269 20.385 - 20.480: 99.8430% ( 1) 00:17:25.269 20.575 - 20.670: 99.8505% ( 1) 00:17:25.269 21.144 - 21.239: 99.8580% ( 1) 00:17:25.269 21.333 - 21.428: 99.8654% ( 1) 00:17:25.269 22.281 - 22.376: 99.8729% ( 1) 00:17:25.269 26.359 - 26.548: 99.8804% ( 1) 00:17:25.269 26.927 - 27.117: 99.8953% ( 2) 00:17:25.269 28.444 - 28.634: 99.9103% ( 2) 00:17:25.269 34.513 - 34.702: 99.9178% ( 1) 00:17:25.269 3980.705 - 4004.978: 99.9701% ( 7) 00:17:25.269 4004.978 - 4029.250: 100.0000% ( 4) 00:17:25.269 00:17:25.269 Complete histogram 00:17:25.269 ================== 00:17:25.269 Range in us Cumulative Count 00:17:25.269 2.062 - 2.074: 0.0673% ( 9) 00:17:25.269 2.074 - 2.086: 16.7003% ( 2225) 00:17:25.269 2.086 - 2.098: 46.7594% ( 4021) 00:17:25.269 2.098 - 2.110: 50.6167% ( 516) 00:17:25.269 2.110 - 2.121: 57.9203% ( 977) 00:17:25.269 2.121 - 2.133: 62.7196% ( 642) 00:17:25.269 2.133 - 2.145: 65.0594% ( 313) 00:17:25.269 2.145 - 2.157: 73.7535% ( 1163) 00:17:25.269 2.157 - 2.169: 80.8402% ( 948) 00:17:25.269 2.169 - 2.181: 82.4101% ( 210) 00:17:25.269 2.181 - 2.193: 86.3123% ( 522) 00:17:25.269 2.193 - 2.204: 88.8839% ( 344) 00:17:25.269 2.204 - 2.216: 89.5567% ( 90) 00:17:25.269 2.216 - 2.228: 91.0967% ( 206) 00:17:25.269 2.228 - 2.240: 93.5636% ( 330) 00:17:25.269 2.240 - 2.252: 94.4233% ( 115) 00:17:25.269 2.252 - 2.264: 94.8269% ( 54) 00:17:25.269 2.264 - 2.276: 95.0512% ( 30) 00:17:25.269 2.276 - 2.287: 95.1708% ( 16) 00:17:25.269 2.287 - 2.299: 95.3652% ( 26) 00:17:25.269 2.299 - 2.311: 95.7763% ( 55) 00:17:25.269 2.311 - 2.323: 95.9707% ( 26) 00:17:25.269 2.323 - 2.335: 96.0230% ( 7) 00:17:25.269 2.335 - 2.347: 96.0754% ( 7) 00:17:25.269 2.347 - 2.359: 96.0978% ( 3) 00:17:25.269 2.359 - 2.370: 96.1651% ( 9) 00:17:25.269 2.370 - 2.382: 96.2697% ( 14) 00:17:25.269 2.382 - 
2.394: 96.4865% ( 29) 00:17:25.269 2.394 - 2.406: 96.6883% ( 27) 00:17:25.269 2.406 - 2.418: 96.8977% ( 28) 00:17:25.269 2.418 - 2.430: 97.0845% ( 25) 00:17:25.269 2.430 - 2.441: 97.3537% ( 36) 00:17:25.269 2.441 - 2.453: 97.6975% ( 46) 00:17:25.269 2.453 - 2.465: 97.9218% ( 30) 00:17:25.269 2.465 - 2.477: 98.0638% ( 19) 00:17:25.269 2.477 - 2.489: 98.1461% ( 11) 00:17:25.269 2.489 - 2.501: 98.2507% ( 14) 00:17:25.269 2.501 - 2.513: 98.3105% ( 8) 00:17:25.269 2.513 - 2.524: 98.3554% ( 6) 00:17:25.269 2.524 - 2.536: 98.4077% ( 7) 00:17:25.269 2.536 - 2.548: 98.4376% ( 4) 00:17:25.269 [2024-07-24 09:03:02.992785] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.269 2.548 - 2.560: 98.4451% ( 1) 00:17:25.269 2.560 - 2.572: 98.4600% ( 2) 00:17:25.269 2.572 - 2.584: 98.4750% ( 2) 00:17:25.269 2.584 - 2.596: 98.4825% ( 1) 00:17:25.269 2.667 - 2.679: 98.4899% ( 1) 00:17:25.269 2.679 - 2.690: 98.4974% ( 1) 00:17:25.269 2.738 - 2.750: 98.5049% ( 1) 00:17:25.269 2.773 - 2.785: 98.5124% ( 1) 00:17:25.269 2.951 - 2.963: 98.5198% ( 1) 00:17:25.269 3.129 - 3.153: 98.5348% ( 2) 00:17:25.269 3.224 - 3.247: 98.5423% ( 1) 00:17:25.269 3.247 - 3.271: 98.5497% ( 1) 00:17:25.269 3.271 - 3.295: 98.5572% ( 1) 00:17:25.269 3.295 - 3.319: 98.5647% ( 1) 00:17:25.269 3.319 - 3.342: 98.5871% ( 3) 00:17:25.269 3.342 - 3.366: 98.6096% ( 3) 00:17:25.269 3.366 - 3.390: 98.6395% ( 4) 00:17:25.269 3.390 - 3.413: 98.6469% ( 1) 00:17:25.269 3.484 - 3.508: 98.6544% ( 1) 00:17:25.269 3.508 - 3.532: 98.6619% ( 1) 00:17:25.269 3.556 - 3.579: 98.6694% ( 1) 00:17:25.269 3.603 - 3.627: 98.6768% ( 1) 00:17:25.269 3.674 - 3.698: 98.6843% ( 1) 00:17:25.269 3.982 - 4.006: 98.6918% ( 1) 00:17:25.269 4.053 - 4.077: 98.6993% ( 1) 00:17:25.269 5.594 - 5.618: 98.7067% ( 1) 00:17:25.269 5.736 - 5.760: 98.7217% ( 2) 00:17:25.269 5.760 - 5.784: 98.7292% ( 1) 00:17:25.269 5.902 - 5.926: 98.7366% ( 1) 00:17:25.269 5.997 - 6.021: 98.7441% ( 1) 00:17:25.269 6.068 - 6.116: 98.7516% ( 1) 00:17:25.269 6.258 - 6.305: 98.7591% ( 1) 00:17:25.269 6.542 - 6.590: 98.7665% ( 1) 00:17:25.269 6.637 - 6.684: 98.7740% ( 1) 00:17:25.269 6.874 - 6.921: 98.7815% ( 1) 00:17:25.269 6.921 - 6.969: 98.7890% ( 1) 00:17:25.269 7.348 - 7.396: 98.7964% ( 1) 00:17:25.269 7.538 - 7.585: 98.8039% ( 1) 00:17:25.269 7.727 - 7.775: 98.8114% ( 1) 00:17:25.269 7.775 - 7.822: 98.8189% ( 1) 00:17:25.269 7.917 - 7.964: 98.8263% ( 1) 00:17:25.269 8.770 - 8.818: 98.8338% ( 1) 00:17:25.269 9.481 - 9.529: 98.8413% ( 1) 00:17:25.269 11.141 - 11.188: 98.8488% ( 1) 00:17:25.269 15.455 - 15.550: 98.8562% ( 1) 00:17:25.269 15.644 - 15.739: 98.8712% ( 2) 00:17:25.269 15.739 - 15.834: 98.9011% ( 4) 00:17:25.269 15.834 - 15.929: 98.9235% ( 3) 00:17:25.269 15.929 - 16.024: 98.9310% ( 1) 00:17:25.269 16.024 - 16.119: 98.9460% ( 2) 00:17:25.269 16.119 - 16.213: 98.9684% ( 3) 00:17:25.269 16.213 - 16.308: 99.0132% ( 6) 00:17:25.269 16.308 - 16.403: 99.0805% ( 9) 00:17:25.269 16.403 - 16.498: 99.1179% ( 5) 00:17:25.269 16.498 - 16.593: 99.1328% ( 2) 00:17:25.269 16.593 - 16.687: 99.1553% ( 3) 00:17:25.269 16.687 - 16.782: 99.1852% ( 4) 00:17:25.269 16.782 - 16.877: 99.2375% ( 7) 00:17:25.269 16.877 - 16.972: 99.2599% ( 3) 00:17:25.269 16.972 - 17.067: 99.2749% ( 2) 00:17:25.269 17.067 - 17.161: 99.2824% ( 1) 00:17:25.269 17.256 - 17.351: 99.3048% ( 3) 00:17:25.269 17.351 - 17.446: 99.3347% ( 4) 00:17:25.269 17.446 - 17.541: 99.3496% ( 2) 00:17:25.269 17.730 - 17.825: 99.3721% ( 3) 00:17:25.269 17.825 - 17.920: 99.3870% ( 2) 00:17:25.269 
17.920 - 18.015: 99.3945% ( 1) 00:17:25.269 18.015 - 18.110: 99.4020% ( 1) 00:17:25.269 18.110 - 18.204: 99.4094% ( 1) 00:17:25.270 18.204 - 18.299: 99.4169% ( 1) 00:17:25.270 27.876 - 28.065: 99.4244% ( 1) 00:17:25.270 3980.705 - 4004.978: 99.7907% ( 49) 00:17:25.270 4004.978 - 4029.250: 100.0000% ( 28) 00:17:25.270 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.270 [ 00:17:25.270 { 00:17:25.270 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:25.270 "subtype": "Discovery", 00:17:25.270 "listen_addresses": [], 00:17:25.270 "allow_any_host": true, 00:17:25.270 "hosts": [] 00:17:25.270 }, 00:17:25.270 { 00:17:25.270 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:25.270 "subtype": "NVMe", 00:17:25.270 "listen_addresses": [ 00:17:25.270 { 00:17:25.270 "trtype": "VFIOUSER", 00:17:25.270 "adrfam": "IPv4", 00:17:25.270 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:25.270 "trsvcid": "0" 00:17:25.270 } 00:17:25.270 ], 00:17:25.270 "allow_any_host": true, 00:17:25.270 "hosts": [], 00:17:25.270 "serial_number": "SPDK1", 00:17:25.270 "model_number": "SPDK bdev Controller", 00:17:25.270 "max_namespaces": 32, 00:17:25.270 "min_cntlid": 1, 00:17:25.270 "max_cntlid": 65519, 00:17:25.270 "namespaces": [ 00:17:25.270 { 00:17:25.270 "nsid": 1, 00:17:25.270 "bdev_name": "Malloc1", 00:17:25.270 "name": "Malloc1", 00:17:25.270 "nguid": "29B1420C7D88490586F17B1FD1502216", 00:17:25.270 "uuid": "29b1420c-7d88-4905-86f1-7b1fd1502216" 00:17:25.270 } 00:17:25.270 ] 00:17:25.270 }, 00:17:25.270 { 00:17:25.270 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:25.270 "subtype": "NVMe", 00:17:25.270 "listen_addresses": [ 00:17:25.270 { 00:17:25.270 "trtype": "VFIOUSER", 00:17:25.270 "adrfam": "IPv4", 00:17:25.270 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:25.270 "trsvcid": "0" 00:17:25.270 } 00:17:25.270 ], 00:17:25.270 "allow_any_host": true, 00:17:25.270 "hosts": [], 00:17:25.270 "serial_number": "SPDK2", 00:17:25.270 "model_number": "SPDK bdev Controller", 00:17:25.270 "max_namespaces": 32, 00:17:25.270 "min_cntlid": 1, 00:17:25.270 "max_cntlid": 65519, 00:17:25.270 "namespaces": [ 00:17:25.270 { 00:17:25.270 "nsid": 1, 00:17:25.270 "bdev_name": "Malloc2", 00:17:25.270 "name": "Malloc2", 00:17:25.270 "nguid": "42E956C1138F4AADB35CED8806229586", 00:17:25.270 "uuid": "42e956c1-138f-4aad-b35c-ed8806229586" 00:17:25.270 } 00:17:25.270 ] 00:17:25.270 } 00:17:25.270 ] 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3766210 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:25.270 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:17:25.270 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.527 [2024-07-24 09:03:03.464523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:17:25.527 Malloc3 00:17:25.527 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:17:25.784 [2024-07-24 09:03:03.842184] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:17:25.784 09:03:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:25.784 Asynchronous Event Request test 00:17:25.784 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.784 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:17:25.784 Registering asynchronous event callbacks... 00:17:25.784 Starting namespace attribute notice tests for all controllers... 00:17:25.784 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:25.784 aer_cb - Changed Namespace 00:17:25.784 Cleaning up... 
00:17:26.043 [ 00:17:26.043 { 00:17:26.043 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:26.043 "subtype": "Discovery", 00:17:26.043 "listen_addresses": [], 00:17:26.043 "allow_any_host": true, 00:17:26.043 "hosts": [] 00:17:26.043 }, 00:17:26.043 { 00:17:26.043 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:26.043 "subtype": "NVMe", 00:17:26.043 "listen_addresses": [ 00:17:26.043 { 00:17:26.043 "trtype": "VFIOUSER", 00:17:26.043 "adrfam": "IPv4", 00:17:26.043 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:26.043 "trsvcid": "0" 00:17:26.043 } 00:17:26.043 ], 00:17:26.043 "allow_any_host": true, 00:17:26.043 "hosts": [], 00:17:26.043 "serial_number": "SPDK1", 00:17:26.043 "model_number": "SPDK bdev Controller", 00:17:26.043 "max_namespaces": 32, 00:17:26.043 "min_cntlid": 1, 00:17:26.043 "max_cntlid": 65519, 00:17:26.043 "namespaces": [ 00:17:26.043 { 00:17:26.043 "nsid": 1, 00:17:26.043 "bdev_name": "Malloc1", 00:17:26.043 "name": "Malloc1", 00:17:26.043 "nguid": "29B1420C7D88490586F17B1FD1502216", 00:17:26.043 "uuid": "29b1420c-7d88-4905-86f1-7b1fd1502216" 00:17:26.043 }, 00:17:26.043 { 00:17:26.043 "nsid": 2, 00:17:26.043 "bdev_name": "Malloc3", 00:17:26.043 "name": "Malloc3", 00:17:26.043 "nguid": "09C159177DB1463DB22A0B87221759DF", 00:17:26.043 "uuid": "09c15917-7db1-463d-b22a-0b87221759df" 00:17:26.043 } 00:17:26.043 ] 00:17:26.043 }, 00:17:26.043 { 00:17:26.043 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:26.043 "subtype": "NVMe", 00:17:26.043 "listen_addresses": [ 00:17:26.043 { 00:17:26.043 "trtype": "VFIOUSER", 00:17:26.043 "adrfam": "IPv4", 00:17:26.043 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:26.043 "trsvcid": "0" 00:17:26.043 } 00:17:26.043 ], 00:17:26.043 "allow_any_host": true, 00:17:26.043 "hosts": [], 00:17:26.043 "serial_number": "SPDK2", 00:17:26.043 "model_number": "SPDK bdev Controller", 00:17:26.043 "max_namespaces": 32, 00:17:26.043 "min_cntlid": 1, 00:17:26.043 "max_cntlid": 65519, 00:17:26.043 "namespaces": [ 00:17:26.043 { 00:17:26.043 "nsid": 1, 00:17:26.043 "bdev_name": "Malloc2", 00:17:26.043 "name": "Malloc2", 00:17:26.043 "nguid": "42E956C1138F4AADB35CED8806229586", 00:17:26.043 "uuid": "42e956c1-138f-4aad-b35c-ed8806229586" 00:17:26.043 } 00:17:26.043 ] 00:17:26.043 } 00:17:26.043 ] 00:17:26.043 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3766210 00:17:26.043 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:26.043 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:26.043 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:17:26.043 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:17:26.043 [2024-07-24 09:03:04.126417] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:17:26.043 [2024-07-24 09:03:04.126461] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3766244 ] 00:17:26.043 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.044 [2024-07-24 09:03:04.142667] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:26.304 [2024-07-24 09:03:04.160360] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:17:26.304 [2024-07-24 09:03:04.167423] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.304 [2024-07-24 09:03:04.167456] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6bbf073000 00:17:26.304 [2024-07-24 09:03:04.171112] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.171445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.172454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.173463] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.174470] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.175495] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.176501] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.177496] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:17:26.304 [2024-07-24 09:03:04.178503] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:17:26.304 [2024-07-24 09:03:04.178525] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6bbde35000 00:17:26.304 [2024-07-24 09:03:04.179640] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.304 [2024-07-24 09:03:04.194425] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:17:26.304 [2024-07-24 09:03:04.194461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:17:26.304 [2024-07-24 09:03:04.196553] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:26.304 [2024-07-24 09:03:04.196607] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: 
max_completions_cap = 64 num_trackers = 192 00:17:26.304 [2024-07-24 09:03:04.196695] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:17:26.304 [2024-07-24 09:03:04.196717] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:17:26.304 [2024-07-24 09:03:04.196727] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:17:26.304 [2024-07-24 09:03:04.197556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:17:26.304 [2024-07-24 09:03:04.197582] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:17:26.304 [2024-07-24 09:03:04.197596] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:17:26.304 [2024-07-24 09:03:04.198568] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:17:26.304 [2024-07-24 09:03:04.198589] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:17:26.304 [2024-07-24 09:03:04.198603] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:17:26.304 [2024-07-24 09:03:04.199570] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:17:26.304 [2024-07-24 09:03:04.199591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:26.304 [2024-07-24 09:03:04.200583] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:17:26.304 [2024-07-24 09:03:04.200604] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:17:26.304 [2024-07-24 09:03:04.200614] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:17:26.304 [2024-07-24 09:03:04.200625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:26.304 [2024-07-24 09:03:04.200735] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:17:26.304 [2024-07-24 09:03:04.200743] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:26.304 [2024-07-24 09:03:04.200752] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:17:26.304 [2024-07-24 09:03:04.205112] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:17:26.305 [2024-07-24 09:03:04.205632] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:17:26.305 [2024-07-24 09:03:04.206636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:26.305 [2024-07-24 09:03:04.207635] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:26.305 [2024-07-24 09:03:04.207714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:26.305 [2024-07-24 09:03:04.208652] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:17:26.305 [2024-07-24 09:03:04.208672] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:26.305 [2024-07-24 09:03:04.208682] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.208705] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:17:26.305 [2024-07-24 09:03:04.208719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.208741] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.305 [2024-07-24 09:03:04.208751] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.305 [2024-07-24 09:03:04.208758] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.305 [2024-07-24 09:03:04.208776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.215120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.215155] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:17:26.305 [2024-07-24 09:03:04.215164] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:17:26.305 [2024-07-24 09:03:04.215172] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:17:26.305 [2024-07-24 09:03:04.215181] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:17:26.305 [2024-07-24 09:03:04.215189] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:17:26.305 [2024-07-24 09:03:04.215197] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:17:26.305 [2024-07-24 09:03:04.215205] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.215219] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.215239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.223115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.223152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.305 [2024-07-24 09:03:04.223170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.305 [2024-07-24 09:03:04.223183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.305 [2024-07-24 09:03:04.223195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:17:26.305 [2024-07-24 09:03:04.223204] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.223220] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.223235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.231111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.231130] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:17:26.305 [2024-07-24 09:03:04.231139] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.231156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.231168] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.231182] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.239120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.239194] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.239210] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.239223] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002f9000 len:4096 00:17:26.305 [2024-07-24 09:03:04.239232] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:17:26.305 [2024-07-24 09:03:04.239238] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.305 [2024-07-24 09:03:04.239248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.247117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.247140] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:17:26.305 [2024-07-24 09:03:04.247157] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.247172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.247185] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.305 [2024-07-24 09:03:04.247194] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.305 [2024-07-24 09:03:04.247204] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.305 [2024-07-24 09:03:04.247214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.255115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.255143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.255160] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.255173] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:17:26.305 [2024-07-24 09:03:04.255182] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.305 [2024-07-24 09:03:04.255188] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.305 [2024-07-24 09:03:04.255197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.262114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.262146] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262173] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262188] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262198] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262206] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262215] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:17:26.305 [2024-07-24 09:03:04.262223] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:17:26.305 [2024-07-24 09:03:04.262231] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:17:26.305 [2024-07-24 09:03:04.262256] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.270117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.270145] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.278115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.278141] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.286113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.286147] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:17:26.305 [2024-07-24 09:03:04.294112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:17:26.305 [2024-07-24 09:03:04.294143] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:17:26.305 [2024-07-24 09:03:04.294155] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:17:26.305 [2024-07-24 09:03:04.294161] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:17:26.306 [2024-07-24 09:03:04.294167] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:17:26.306 [2024-07-24 09:03:04.294174] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:17:26.306 [2024-07-24 09:03:04.294183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:17:26.306 [2024-07-24 09:03:04.294195] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:17:26.306 [2024-07-24 09:03:04.294204] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:17:26.306 [2024-07-24 09:03:04.294210] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.306 [2024-07-24 09:03:04.294219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:17:26.306 [2024-07-24 09:03:04.294230] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:17:26.306 [2024-07-24 09:03:04.294238] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:17:26.306 [2024-07-24 09:03:04.294244] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.306 [2024-07-24 09:03:04.294253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:17:26.306 [2024-07-24 09:03:04.294265] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:17:26.306 [2024-07-24 09:03:04.294273] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:17:26.306 [2024-07-24 09:03:04.294279] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:17:26.306 [2024-07-24 09:03:04.294288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:17:26.306 [2024-07-24 09:03:04.302112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:17:26.306 [2024-07-24 09:03:04.302140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:17:26.306 [2024-07-24 09:03:04.302158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:17:26.306 [2024-07-24 09:03:04.302171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:17:26.306 ===================================================== 00:17:26.306 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:26.306 ===================================================== 00:17:26.306 Controller Capabilities/Features 00:17:26.306 ================================ 00:17:26.306 Vendor ID: 4e58 00:17:26.306 Subsystem Vendor ID: 4e58 00:17:26.306 Serial Number: SPDK2 00:17:26.306 Model Number: SPDK bdev Controller 00:17:26.306 Firmware Version: 24.09 00:17:26.306 Recommended Arb Burst: 6 00:17:26.306 IEEE OUI Identifier: 8d 6b 50 00:17:26.306 Multi-path I/O 00:17:26.306 May have multiple subsystem ports: Yes 00:17:26.306 May have multiple controllers: Yes 00:17:26.306 Associated with SR-IOV VF: No 00:17:26.306 Max Data Transfer Size: 131072 00:17:26.306 Max Number of Namespaces: 32 00:17:26.306 Max Number of I/O Queues: 127 00:17:26.306 NVMe Specification Version (VS): 1.3 00:17:26.306 NVMe Specification Version (Identify): 1.3 00:17:26.306 Maximum Queue Entries: 256 00:17:26.306 Contiguous Queues Required: Yes 00:17:26.306 
Arbitration Mechanisms Supported 00:17:26.306 Weighted Round Robin: Not Supported 00:17:26.306 Vendor Specific: Not Supported 00:17:26.306 Reset Timeout: 15000 ms 00:17:26.306 Doorbell Stride: 4 bytes 00:17:26.306 NVM Subsystem Reset: Not Supported 00:17:26.306 Command Sets Supported 00:17:26.306 NVM Command Set: Supported 00:17:26.306 Boot Partition: Not Supported 00:17:26.306 Memory Page Size Minimum: 4096 bytes 00:17:26.306 Memory Page Size Maximum: 4096 bytes 00:17:26.306 Persistent Memory Region: Not Supported 00:17:26.306 Optional Asynchronous Events Supported 00:17:26.306 Namespace Attribute Notices: Supported 00:17:26.306 Firmware Activation Notices: Not Supported 00:17:26.306 ANA Change Notices: Not Supported 00:17:26.306 PLE Aggregate Log Change Notices: Not Supported 00:17:26.306 LBA Status Info Alert Notices: Not Supported 00:17:26.306 EGE Aggregate Log Change Notices: Not Supported 00:17:26.306 Normal NVM Subsystem Shutdown event: Not Supported 00:17:26.306 Zone Descriptor Change Notices: Not Supported 00:17:26.306 Discovery Log Change Notices: Not Supported 00:17:26.306 Controller Attributes 00:17:26.306 128-bit Host Identifier: Supported 00:17:26.306 Non-Operational Permissive Mode: Not Supported 00:17:26.306 NVM Sets: Not Supported 00:17:26.306 Read Recovery Levels: Not Supported 00:17:26.306 Endurance Groups: Not Supported 00:17:26.306 Predictable Latency Mode: Not Supported 00:17:26.306 Traffic Based Keep ALive: Not Supported 00:17:26.306 Namespace Granularity: Not Supported 00:17:26.306 SQ Associations: Not Supported 00:17:26.306 UUID List: Not Supported 00:17:26.306 Multi-Domain Subsystem: Not Supported 00:17:26.306 Fixed Capacity Management: Not Supported 00:17:26.306 Variable Capacity Management: Not Supported 00:17:26.306 Delete Endurance Group: Not Supported 00:17:26.306 Delete NVM Set: Not Supported 00:17:26.306 Extended LBA Formats Supported: Not Supported 00:17:26.306 Flexible Data Placement Supported: Not Supported 00:17:26.306 00:17:26.306 Controller Memory Buffer Support 00:17:26.306 ================================ 00:17:26.306 Supported: No 00:17:26.306 00:17:26.306 Persistent Memory Region Support 00:17:26.306 ================================ 00:17:26.306 Supported: No 00:17:26.306 00:17:26.306 Admin Command Set Attributes 00:17:26.306 ============================ 00:17:26.306 Security Send/Receive: Not Supported 00:17:26.306 Format NVM: Not Supported 00:17:26.306 Firmware Activate/Download: Not Supported 00:17:26.306 Namespace Management: Not Supported 00:17:26.306 Device Self-Test: Not Supported 00:17:26.306 Directives: Not Supported 00:17:26.306 NVMe-MI: Not Supported 00:17:26.306 Virtualization Management: Not Supported 00:17:26.306 Doorbell Buffer Config: Not Supported 00:17:26.306 Get LBA Status Capability: Not Supported 00:17:26.306 Command & Feature Lockdown Capability: Not Supported 00:17:26.306 Abort Command Limit: 4 00:17:26.306 Async Event Request Limit: 4 00:17:26.306 Number of Firmware Slots: N/A 00:17:26.306 Firmware Slot 1 Read-Only: N/A 00:17:26.306 Firmware Activation Without Reset: N/A 00:17:26.306 Multiple Update Detection Support: N/A 00:17:26.306 Firmware Update Granularity: No Information Provided 00:17:26.306 Per-Namespace SMART Log: No 00:17:26.306 Asymmetric Namespace Access Log Page: Not Supported 00:17:26.306 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:17:26.306 Command Effects Log Page: Supported 00:17:26.306 Get Log Page Extended Data: Supported 00:17:26.306 Telemetry Log Pages: Not Supported 00:17:26.306 Persistent Event Log 
Pages: Not Supported 00:17:26.306 Supported Log Pages Log Page: May Support 00:17:26.306 Commands Supported & Effects Log Page: Not Supported 00:17:26.306 Feature Identifiers & Effects Log Page:May Support 00:17:26.306 NVMe-MI Commands & Effects Log Page: May Support 00:17:26.306 Data Area 4 for Telemetry Log: Not Supported 00:17:26.306 Error Log Page Entries Supported: 128 00:17:26.306 Keep Alive: Supported 00:17:26.306 Keep Alive Granularity: 10000 ms 00:17:26.306 00:17:26.306 NVM Command Set Attributes 00:17:26.306 ========================== 00:17:26.306 Submission Queue Entry Size 00:17:26.306 Max: 64 00:17:26.306 Min: 64 00:17:26.306 Completion Queue Entry Size 00:17:26.306 Max: 16 00:17:26.306 Min: 16 00:17:26.306 Number of Namespaces: 32 00:17:26.306 Compare Command: Supported 00:17:26.306 Write Uncorrectable Command: Not Supported 00:17:26.306 Dataset Management Command: Supported 00:17:26.306 Write Zeroes Command: Supported 00:17:26.306 Set Features Save Field: Not Supported 00:17:26.306 Reservations: Not Supported 00:17:26.306 Timestamp: Not Supported 00:17:26.306 Copy: Supported 00:17:26.306 Volatile Write Cache: Present 00:17:26.306 Atomic Write Unit (Normal): 1 00:17:26.306 Atomic Write Unit (PFail): 1 00:17:26.306 Atomic Compare & Write Unit: 1 00:17:26.306 Fused Compare & Write: Supported 00:17:26.306 Scatter-Gather List 00:17:26.306 SGL Command Set: Supported (Dword aligned) 00:17:26.306 SGL Keyed: Not Supported 00:17:26.306 SGL Bit Bucket Descriptor: Not Supported 00:17:26.306 SGL Metadata Pointer: Not Supported 00:17:26.306 Oversized SGL: Not Supported 00:17:26.306 SGL Metadata Address: Not Supported 00:17:26.306 SGL Offset: Not Supported 00:17:26.306 Transport SGL Data Block: Not Supported 00:17:26.306 Replay Protected Memory Block: Not Supported 00:17:26.306 00:17:26.306 Firmware Slot Information 00:17:26.307 ========================= 00:17:26.307 Active slot: 1 00:17:26.307 Slot 1 Firmware Revision: 24.09 00:17:26.307 00:17:26.307 00:17:26.307 Commands Supported and Effects 00:17:26.307 ============================== 00:17:26.307 Admin Commands 00:17:26.307 -------------- 00:17:26.307 Get Log Page (02h): Supported 00:17:26.307 Identify (06h): Supported 00:17:26.307 Abort (08h): Supported 00:17:26.307 Set Features (09h): Supported 00:17:26.307 Get Features (0Ah): Supported 00:17:26.307 Asynchronous Event Request (0Ch): Supported 00:17:26.307 Keep Alive (18h): Supported 00:17:26.307 I/O Commands 00:17:26.307 ------------ 00:17:26.307 Flush (00h): Supported LBA-Change 00:17:26.307 Write (01h): Supported LBA-Change 00:17:26.307 Read (02h): Supported 00:17:26.307 Compare (05h): Supported 00:17:26.307 Write Zeroes (08h): Supported LBA-Change 00:17:26.307 Dataset Management (09h): Supported LBA-Change 00:17:26.307 Copy (19h): Supported LBA-Change 00:17:26.307 00:17:26.307 Error Log 00:17:26.307 ========= 00:17:26.307 00:17:26.307 Arbitration 00:17:26.307 =========== 00:17:26.307 Arbitration Burst: 1 00:17:26.307 00:17:26.307 Power Management 00:17:26.307 ================ 00:17:26.307 Number of Power States: 1 00:17:26.307 Current Power State: Power State #0 00:17:26.307 Power State #0: 00:17:26.307 Max Power: 0.00 W 00:17:26.307 Non-Operational State: Operational 00:17:26.307 Entry Latency: Not Reported 00:17:26.307 Exit Latency: Not Reported 00:17:26.307 Relative Read Throughput: 0 00:17:26.307 Relative Read Latency: 0 00:17:26.307 Relative Write Throughput: 0 00:17:26.307 Relative Write Latency: 0 00:17:26.307 Idle Power: Not Reported 00:17:26.307 Active Power: Not Reported 
00:17:26.307 Non-Operational Permissive Mode: Not Supported 00:17:26.307 00:17:26.307 Health Information 00:17:26.307 ================== 00:17:26.307 Critical Warnings: 00:17:26.307 Available Spare Space: OK 00:17:26.307 Temperature: OK 00:17:26.307 Device Reliability: OK 00:17:26.307 Read Only: No 00:17:26.307 Volatile Memory Backup: OK 00:17:26.307 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:26.307 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:17:26.307 Available Spare: 0% 00:17:26.307 Available Spare Threshold: 0% [2024-07-24 09:03:04.302290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:17:26.307 [2024-07-24 09:03:04.310128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:17:26.307 [2024-07-24 09:03:04.310182] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:17:26.307 [2024-07-24 09:03:04.310200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.307 [2024-07-24 09:03:04.310211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.307 [2024-07-24 09:03:04.310225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.307 [2024-07-24 09:03:04.310235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:26.307 [2024-07-24 09:03:04.310298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:17:26.307 [2024-07-24 09:03:04.310318] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:17:26.307 [2024-07-24 09:03:04.311300] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:26.307 [2024-07-24 09:03:04.311386] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:17:26.307 [2024-07-24 09:03:04.311405] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:17:26.307 [2024-07-24 09:03:04.312307] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:17:26.307 [2024-07-24 09:03:04.312332] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:17:26.307 [2024-07-24 09:03:04.312385] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:17:26.307 [2024-07-24 09:03:04.313614] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:17:26.307 Life Percentage Used: 0% 00:17:26.307 Data Units Read: 0 00:17:26.307 Data Units Written: 0 00:17:26.307 Host Read Commands: 0 00:17:26.307 Host Write Commands: 0 00:17:26.307 Controller Busy Time: 0 minutes 00:17:26.307 Power Cycles: 0 00:17:26.307 Power On Hours: 0 hours 00:17:26.307 Unsafe Shutdowns: 0 00:17:26.307 Unrecoverable
Media Errors: 0 00:17:26.307 Lifetime Error Log Entries: 0 00:17:26.307 Warning Temperature Time: 0 minutes 00:17:26.307 Critical Temperature Time: 0 minutes 00:17:26.307 00:17:26.307 Number of Queues 00:17:26.307 ================ 00:17:26.307 Number of I/O Submission Queues: 127 00:17:26.307 Number of I/O Completion Queues: 127 00:17:26.307 00:17:26.307 Active Namespaces 00:17:26.307 ================= 00:17:26.307 Namespace ID:1 00:17:26.307 Error Recovery Timeout: Unlimited 00:17:26.307 Command Set Identifier: NVM (00h) 00:17:26.307 Deallocate: Supported 00:17:26.307 Deallocated/Unwritten Error: Not Supported 00:17:26.307 Deallocated Read Value: Unknown 00:17:26.307 Deallocate in Write Zeroes: Not Supported 00:17:26.307 Deallocated Guard Field: 0xFFFF 00:17:26.307 Flush: Supported 00:17:26.307 Reservation: Supported 00:17:26.307 Namespace Sharing Capabilities: Multiple Controllers 00:17:26.307 Size (in LBAs): 131072 (0GiB) 00:17:26.307 Capacity (in LBAs): 131072 (0GiB) 00:17:26.307 Utilization (in LBAs): 131072 (0GiB) 00:17:26.307 NGUID: 42E956C1138F4AADB35CED8806229586 00:17:26.307 UUID: 42e956c1-138f-4aad-b35c-ed8806229586 00:17:26.307 Thin Provisioning: Not Supported 00:17:26.307 Per-NS Atomic Units: Yes 00:17:26.307 Atomic Boundary Size (Normal): 0 00:17:26.307 Atomic Boundary Size (PFail): 0 00:17:26.307 Atomic Boundary Offset: 0 00:17:26.307 Maximum Single Source Range Length: 65535 00:17:26.307 Maximum Copy Length: 65535 00:17:26.307 Maximum Source Range Count: 1 00:17:26.307 NGUID/EUI64 Never Reused: No 00:17:26.307 Namespace Write Protected: No 00:17:26.307 Number of LBA Formats: 1 00:17:26.307 Current LBA Format: LBA Format #00 00:17:26.307 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:26.307 00:17:26.307 09:03:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:17:26.307 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.566 [2024-07-24 09:03:04.545914] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:31.835 Initializing NVMe Controllers 00:17:31.835 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:31.835 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:31.835 Initialization complete. Launching workers. 
00:17:31.835 ======================================================== 00:17:31.835 Latency(us) 00:17:31.835 Device Information : IOPS MiB/s Average min max 00:17:31.835 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35450.16 138.48 3610.20 1146.69 7599.36 00:17:31.835 ======================================================== 00:17:31.835 Total : 35450.16 138.48 3610.20 1146.69 7599.36 00:17:31.835 00:17:31.835 [2024-07-24 09:03:09.650479] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:31.835 09:03:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:31.835 EAL: No free 2048 kB hugepages reported on node 1 00:17:31.835 [2024-07-24 09:03:09.891345] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:37.117 Initializing NVMe Controllers 00:17:37.117 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:37.117 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:37.117 Initialization complete. Launching workers. 00:17:37.117 ======================================================== 00:17:37.117 Latency(us) 00:17:37.117 Device Information : IOPS MiB/s Average min max 00:17:37.117 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32743.38 127.90 3911.34 1194.56 8532.37 00:17:37.118 ======================================================== 00:17:37.118 Total : 32743.38 127.90 3911.34 1194.56 8532.37 00:17:37.118 00:17:37.118 [2024-07-24 09:03:14.914498] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:37.118 09:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:37.118 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.118 [2024-07-24 09:03:15.120143] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:42.460 [2024-07-24 09:03:20.271256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:42.460 Initializing NVMe Controllers 00:17:42.460 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.460 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:42.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:42.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:42.460 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:42.460 Initialization complete. Launching workers. 
00:17:42.460 Starting thread on core 2 00:17:42.460 Starting thread on core 3 00:17:42.460 Starting thread on core 1 00:17:42.460 09:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:42.460 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.720 [2024-07-24 09:03:20.581662] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.012 [2024-07-24 09:03:23.654361] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:46.012 Initializing NVMe Controllers 00:17:46.012 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.012 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.012 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:46.012 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:46.012 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:46.012 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:46.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:46.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:46.012 Initialization complete. Launching workers. 00:17:46.012 Starting thread on core 1 with urgent priority queue 00:17:46.012 Starting thread on core 2 with urgent priority queue 00:17:46.012 Starting thread on core 3 with urgent priority queue 00:17:46.012 Starting thread on core 0 with urgent priority queue 00:17:46.012 SPDK bdev Controller (SPDK2 ) core 0: 3982.33 IO/s 25.11 secs/100000 ios 00:17:46.012 SPDK bdev Controller (SPDK2 ) core 1: 4429.00 IO/s 22.58 secs/100000 ios 00:17:46.012 SPDK bdev Controller (SPDK2 ) core 2: 3808.33 IO/s 26.26 secs/100000 ios 00:17:46.012 SPDK bdev Controller (SPDK2 ) core 3: 4579.33 IO/s 21.84 secs/100000 ios 00:17:46.012 ======================================================== 00:17:46.012 00:17:46.012 09:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:46.012 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.012 [2024-07-24 09:03:23.951561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:46.012 Initializing NVMe Controllers 00:17:46.012 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.012 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:46.012 Namespace ID: 1 size: 0GB 00:17:46.012 Initialization complete. 00:17:46.012 INFO: using host memory buffer for IO 00:17:46.012 Hello world! 
00:17:46.012 [2024-07-24 09:03:23.961761] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:46.012 09:03:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:46.012 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.271 [2024-07-24 09:03:24.261746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.649 Initializing NVMe Controllers 00:17:47.649 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.649 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:47.649 Initialization complete. Launching workers. 00:17:47.649 submit (in ns) avg, min, max = 9536.1, 3585.6, 4016900.0 00:17:47.649 complete (in ns) avg, min, max = 25183.3, 2060.0, 4025590.0 00:17:47.649 00:17:47.649 Submit histogram 00:17:47.649 ================ 00:17:47.649 Range in us Cumulative Count 00:17:47.649 3.579 - 3.603: 0.5309% ( 72) 00:17:47.649 3.603 - 3.627: 6.7610% ( 845) 00:17:47.649 3.627 - 3.650: 18.4325% ( 1583) 00:17:47.649 3.650 - 3.674: 30.6643% ( 1659) 00:17:47.649 3.674 - 3.698: 39.6667% ( 1221) 00:17:47.649 3.698 - 3.721: 46.9144% ( 983) 00:17:47.649 3.721 - 3.745: 53.4616% ( 888) 00:17:47.649 3.745 - 3.769: 59.3600% ( 800) 00:17:47.649 3.769 - 3.793: 64.7866% ( 736) 00:17:47.649 3.793 - 3.816: 68.8417% ( 550) 00:17:47.649 3.816 - 3.840: 72.1374% ( 447) 00:17:47.649 3.840 - 3.864: 74.9244% ( 378) 00:17:47.649 3.864 - 3.887: 78.3824% ( 469) 00:17:47.649 3.887 - 3.911: 82.1500% ( 511) 00:17:47.649 3.911 - 3.935: 85.4310% ( 445) 00:17:47.649 3.935 - 3.959: 87.5249% ( 284) 00:17:47.649 3.959 - 3.982: 89.1912% ( 226) 00:17:47.649 3.982 - 4.006: 90.8206% ( 221) 00:17:47.649 4.006 - 4.030: 92.4648% ( 223) 00:17:47.649 4.030 - 4.053: 94.0869% ( 220) 00:17:47.649 4.053 - 4.077: 95.1412% ( 143) 00:17:47.649 4.077 - 4.101: 95.9670% ( 112) 00:17:47.649 4.101 - 4.124: 96.4315% ( 63) 00:17:47.649 4.124 - 4.148: 96.7485% ( 43) 00:17:47.649 4.148 - 4.172: 96.9697% ( 30) 00:17:47.649 4.172 - 4.196: 97.0950% ( 17) 00:17:47.649 4.196 - 4.219: 97.2130% ( 16) 00:17:47.649 4.219 - 4.243: 97.2499% ( 5) 00:17:47.649 4.243 - 4.267: 97.3310% ( 11) 00:17:47.649 4.267 - 4.290: 97.4047% ( 10) 00:17:47.649 4.290 - 4.314: 97.4858% ( 11) 00:17:47.649 4.314 - 4.338: 97.5522% ( 9) 00:17:47.649 4.338 - 4.361: 97.6185% ( 9) 00:17:47.649 4.361 - 4.385: 97.6333% ( 2) 00:17:47.649 4.385 - 4.409: 97.6480% ( 2) 00:17:47.649 4.409 - 4.433: 97.6701% ( 3) 00:17:47.649 4.433 - 4.456: 97.6849% ( 2) 00:17:47.649 4.480 - 4.504: 97.6996% ( 2) 00:17:47.649 4.504 - 4.527: 97.7070% ( 1) 00:17:47.649 4.527 - 4.551: 97.7217% ( 2) 00:17:47.649 4.575 - 4.599: 97.7291% ( 1) 00:17:47.649 4.599 - 4.622: 97.7365% ( 1) 00:17:47.649 4.622 - 4.646: 97.7439% ( 1) 00:17:47.649 4.646 - 4.670: 97.7807% ( 5) 00:17:47.649 4.670 - 4.693: 97.7955% ( 2) 00:17:47.649 4.693 - 4.717: 97.8397% ( 6) 00:17:47.649 4.717 - 4.741: 97.8471% ( 1) 00:17:47.649 4.741 - 4.764: 97.8692% ( 3) 00:17:47.649 4.764 - 4.788: 97.9208% ( 7) 00:17:47.649 4.788 - 4.812: 97.9724% ( 7) 00:17:47.649 4.812 - 4.836: 98.0167% ( 6) 00:17:47.649 4.836 - 4.859: 98.0978% ( 11) 00:17:47.649 4.859 - 4.883: 98.1125% ( 2) 00:17:47.649 4.883 - 4.907: 98.1494% ( 5) 00:17:47.649 4.907 - 4.930: 98.2157% ( 9) 00:17:47.649 4.930 - 4.954: 98.2379% ( 3) 
00:17:47.649 4.954 - 4.978: 98.2821% ( 6) 00:17:47.649 4.978 - 5.001: 98.3116% ( 4) 00:17:47.649 5.001 - 5.025: 98.3190% ( 1) 00:17:47.649 5.025 - 5.049: 98.3411% ( 3) 00:17:47.649 5.049 - 5.073: 98.3632% ( 3) 00:17:47.650 5.073 - 5.096: 98.3853% ( 3) 00:17:47.650 5.096 - 5.120: 98.4001% ( 2) 00:17:47.650 5.120 - 5.144: 98.4148% ( 2) 00:17:47.650 5.144 - 5.167: 98.4222% ( 1) 00:17:47.650 5.191 - 5.215: 98.4443% ( 3) 00:17:47.650 5.215 - 5.239: 98.4517% ( 1) 00:17:47.650 5.262 - 5.286: 98.4590% ( 1) 00:17:47.650 5.310 - 5.333: 98.4664% ( 1) 00:17:47.650 5.333 - 5.357: 98.4738% ( 1) 00:17:47.650 5.476 - 5.499: 98.4812% ( 1) 00:17:47.650 5.499 - 5.523: 98.4885% ( 1) 00:17:47.650 5.594 - 5.618: 98.4959% ( 1) 00:17:47.650 6.163 - 6.210: 98.5033% ( 1) 00:17:47.650 6.305 - 6.353: 98.5107% ( 1) 00:17:47.650 6.590 - 6.637: 98.5180% ( 1) 00:17:47.650 6.637 - 6.684: 98.5328% ( 2) 00:17:47.650 6.732 - 6.779: 98.5401% ( 1) 00:17:47.650 6.827 - 6.874: 98.5475% ( 1) 00:17:47.650 6.874 - 6.921: 98.5623% ( 2) 00:17:47.650 6.921 - 6.969: 98.5696% ( 1) 00:17:47.650 6.969 - 7.016: 98.5770% ( 1) 00:17:47.650 7.064 - 7.111: 98.5844% ( 1) 00:17:47.650 7.538 - 7.585: 98.5918% ( 1) 00:17:47.650 7.585 - 7.633: 98.6065% ( 2) 00:17:47.650 7.633 - 7.680: 98.6434% ( 5) 00:17:47.650 7.680 - 7.727: 98.6581% ( 2) 00:17:47.650 7.775 - 7.822: 98.6729% ( 2) 00:17:47.650 7.822 - 7.870: 98.6876% ( 2) 00:17:47.650 7.917 - 7.964: 98.7024% ( 2) 00:17:47.650 7.964 - 8.012: 98.7097% ( 1) 00:17:47.650 8.059 - 8.107: 98.7171% ( 1) 00:17:47.650 8.107 - 8.154: 98.7245% ( 1) 00:17:47.650 8.249 - 8.296: 98.7318% ( 1) 00:17:47.650 8.439 - 8.486: 98.7466% ( 2) 00:17:47.650 8.628 - 8.676: 98.7540% ( 1) 00:17:47.650 8.818 - 8.865: 98.7613% ( 1) 00:17:47.650 8.865 - 8.913: 98.7687% ( 1) 00:17:47.650 9.007 - 9.055: 98.7761% ( 1) 00:17:47.650 9.244 - 9.292: 98.7835% ( 1) 00:17:47.650 9.434 - 9.481: 98.7908% ( 1) 00:17:47.650 9.529 - 9.576: 98.7982% ( 1) 00:17:47.650 9.576 - 9.624: 98.8056% ( 1) 00:17:47.650 9.766 - 9.813: 98.8129% ( 1) 00:17:47.650 10.003 - 10.050: 98.8203% ( 1) 00:17:47.650 10.240 - 10.287: 98.8351% ( 2) 00:17:47.650 10.809 - 10.856: 98.8424% ( 1) 00:17:47.650 10.904 - 10.951: 98.8498% ( 1) 00:17:47.650 10.951 - 10.999: 98.8572% ( 1) 00:17:47.650 11.141 - 11.188: 98.8646% ( 1) 00:17:47.650 11.757 - 11.804: 98.8719% ( 1) 00:17:47.650 11.852 - 11.899: 98.8793% ( 1) 00:17:47.650 11.947 - 11.994: 98.8867% ( 1) 00:17:47.650 12.231 - 12.326: 98.8940% ( 1) 00:17:47.650 12.516 - 12.610: 98.9088% ( 2) 00:17:47.650 12.705 - 12.800: 98.9162% ( 1) 00:17:47.650 14.317 - 14.412: 98.9235% ( 1) 00:17:47.650 14.601 - 14.696: 98.9309% ( 1) 00:17:47.650 16.972 - 17.067: 98.9383% ( 1) 00:17:47.650 17.067 - 17.161: 98.9530% ( 2) 00:17:47.650 17.161 - 17.256: 98.9678% ( 2) 00:17:47.650 17.256 - 17.351: 98.9825% ( 2) 00:17:47.650 17.351 - 17.446: 98.9899% ( 1) 00:17:47.650 17.446 - 17.541: 99.0268% ( 5) 00:17:47.650 17.541 - 17.636: 99.0710% ( 6) 00:17:47.650 17.636 - 17.730: 99.1595% ( 12) 00:17:47.650 17.730 - 17.825: 99.1963% ( 5) 00:17:47.650 17.825 - 17.920: 99.2627% ( 9) 00:17:47.650 17.920 - 18.015: 99.3438% ( 11) 00:17:47.650 18.015 - 18.110: 99.3880% ( 6) 00:17:47.650 18.110 - 18.204: 99.4249% ( 5) 00:17:47.650 18.204 - 18.299: 99.4544% ( 4) 00:17:47.650 18.299 - 18.394: 99.4986% ( 6) 00:17:47.650 18.394 - 18.489: 99.5871% ( 12) 00:17:47.650 18.489 - 18.584: 99.6608% ( 10) 00:17:47.650 18.584 - 18.679: 99.7051% ( 6) 00:17:47.650 18.773 - 18.868: 99.7198% ( 2) 00:17:47.650 18.868 - 18.963: 99.7493% ( 4) 00:17:47.650 19.058 - 19.153: 99.7641% 
( 2) 00:17:47.650 19.153 - 19.247: 99.7788% ( 2) 00:17:47.650 19.247 - 19.342: 99.7862% ( 1) 00:17:47.650 19.342 - 19.437: 99.8009% ( 2) 00:17:47.650 19.437 - 19.532: 99.8083% ( 1) 00:17:47.650 19.532 - 19.627: 99.8157% ( 1) 00:17:47.650 19.627 - 19.721: 99.8304% ( 2) 00:17:47.650 21.618 - 21.713: 99.8452% ( 2) 00:17:47.650 24.462 - 24.652: 99.8525% ( 1) 00:17:47.650 26.548 - 26.738: 99.8599% ( 1) 00:17:47.650 3980.705 - 4004.978: 99.9853% ( 17) 00:17:47.650 4004.978 - 4029.250: 100.0000% ( 2) 00:17:47.650 00:17:47.650 Complete histogram 00:17:47.650 ================== 00:17:47.650 Range in us Cumulative Count 00:17:47.650 2.050 - 2.062: 0.0221% ( 3) 00:17:47.650 2.062 - 2.074: 15.6676% ( 2122) 00:17:47.650 2.074 - 2.086: 45.8232% ( 4090) 00:17:47.650 2.086 - 2.098: 47.6738% ( 251) 00:17:47.650 2.098 - 2.110: 56.5509% ( 1204) 00:17:47.650 2.110 - 2.121: 62.5525% ( 814) 00:17:47.650 2.121 - 2.133: 64.5432% ( 270) 00:17:47.650 2.133 - 2.145: 75.0793% ( 1429) 00:17:47.650 2.145 - 2.157: 81.8624% ( 920) 00:17:47.650 2.157 - 2.169: 83.0126% ( 156) 00:17:47.650 2.169 - 2.181: 87.5101% ( 610) 00:17:47.650 2.181 - 2.193: 89.6852% ( 295) 00:17:47.650 2.193 - 2.204: 90.4151% ( 99) 00:17:47.650 2.204 - 2.216: 91.8012% ( 188) 00:17:47.650 2.216 - 2.228: 94.0942% ( 311) 00:17:47.650 2.228 - 2.240: 94.8242% ( 99) 00:17:47.650 2.240 - 2.252: 95.2960% ( 64) 00:17:47.650 2.252 - 2.264: 95.5467% ( 34) 00:17:47.650 2.264 - 2.276: 95.6426% ( 13) 00:17:47.650 2.276 - 2.287: 95.7900% ( 20) 00:17:47.650 2.287 - 2.299: 96.1071% ( 43) 00:17:47.650 2.299 - 2.311: 96.2988% ( 26) 00:17:47.650 2.311 - 2.323: 96.3872% ( 12) 00:17:47.650 2.323 - 2.335: 96.4462% ( 8) 00:17:47.650 2.335 - 2.347: 96.5421% ( 13) 00:17:47.650 2.347 - 2.359: 96.7190% ( 24) 00:17:47.650 2.359 - 2.370: 97.0582% ( 46) 00:17:47.650 2.370 - 2.382: 97.3826% ( 44) 00:17:47.650 2.382 - 2.394: 97.6480% ( 36) 00:17:47.650 2.394 - 2.406: 97.8839% ( 32) 00:17:47.651 2.406 - 2.418: 98.0462% ( 22) 00:17:47.651 2.418 - 2.430: 98.2452% ( 27) 00:17:47.651 2.430 - 2.441: 98.3411% ( 13) 00:17:47.651 2.441 - 2.453: 98.4222% ( 11) 00:17:47.651 2.453 - 2.465: 98.4517% ( 4) 00:17:47.651 2.465 - 2.477: 98.4959% ( 6) 00:17:47.651 2.477 - 2.489: 98.5254% ( 4) 00:17:47.651 2.489 - 2.501: 98.5475% ( 3) 00:17:47.651 2.501 - 2.513: 98.5696% ( 3) 00:17:47.651 2.513 - 2.524: 98.5918% ( 3) 00:17:47.651 2.524 - 2.536: 98.5991% ( 1) 00:17:47.651 2.536 - 2.548: 98.6065% ( 1) 00:17:47.651 2.548 - 2.560: 98.6139% ( 1) 00:17:47.651 2.584 - 2.596: 98.6212% ( 1) 00:17:47.651 2.596 - 2.607: 98.6286% ( 1) 00:17:47.651 2.619 - 2.631: 98.6360% ( 1) 00:17:47.651 2.667 - 2.679: 98.6434% ( 1) 00:17:47.651 2.750 - 2.761: 98.6507% ( 1) 00:17:47.651 2.797 - 2.809: 98.6655% ( 2) 00:17:47.651 3.058 - 3.081: 98.6729% ( 1) 00:17:47.651 3.271 - 3.295: 98.6876% ( 2) 00:17:47.651 3.319 - 3.342: 98.6950% ( 1) 00:17:47.651 3.366 - 3.390: 98.7024% ( 1) 00:17:47.651 3.390 - 3.413: 98.7097% ( 1) 00:17:47.651 3.532 - 3.556: 98.7171% ( 1) 00:17:47.651 3.745 - 3.769: 98.7245% ( 1) 00:17:47.651 3.840 - 3.864: 98.7318% ( 1) 00:17:47.651 4.006 - 4.030: 98.7392% ( 1) 00:17:47.651 4.053 - 4.077: 98.7466% ( 1) 00:17:47.651 4.504 - 4.527: 98.7540% ( 1) 00:17:47.651 4.954 - 4.978: 98.7613% ( 1) 00:17:47.651 5.262 - 5.286: 98.7687% ( 1) 00:17:47.651 5.381 - 5.404: 98.7761% ( 1) 00:17:47.651 5.499 - 5.523: 98.7835% ( 1) 00:17:47.651 5.689 - 5.713: 98.7908% ( 1) 00:17:47.651 5.713 - 5.736: 98.7982% ( 1) 00:17:47.651 5.902 - 5.926: 98.8056% ( 1) 00:17:47.651 6.068 - 6.116: 98.8129% ( 1) 00:17:47.651 6.116 - 6.163: 
98.8203% ( 1) 00:17:47.651 6.305 - 6.353: 98.8277% ( 1) 00:17:47.651 6.447 - 6.495: 98.8351% ( 1) 00:17:47.651 6.495 - 6.542: 98.8498% ( 2) 00:17:47.651 6.684 - 6.732: 98.8572% ( 1) 00:17:47.651 6.921 - 6.969: 98.8646% ( 1) 00:17:47.651 7.301 - 7.348: 98.8719% ( 1) 00:17:47.651 7.680 - 7.727: 98.8793% ( 1) 00:17:47.651 8.960 - 9.007: 98.8867% ( 1) 00:17:47.651 9.387 - 9.434: 98.8940% ( 1) 00:17:47.651 10.382 - 10.430: 98.9014% ( 1) 00:17:47.651 15.360 - 15.455: 98.9088% ( 1) 00:17:47.651 15.455 - 15.550: 98.9235% ( 2) 00:17:47.651 15.550 - 15.644: 98.9457% ( 3) 00:17:47.651 15.644 - 15.739: 98.9530% ( 1) 00:17:47.651 15.739 - 15.834: 98.9604% ( 1) 00:17:47.651 15.834 - 15.929: 98.9752% ( 2) 00:17:47.651 15.929 - 16.024: 98.9899% ( 2) 00:17:47.651 16.024 - 16.119: 99.0120% ( 3) 00:17:47.651 16.119 - 16.213: 99.0489% ( 5) 00:17:47.651 16.213 - 16.308: 99.0784% ( 4) 00:17:47.651 16.308 - 16.403: 99.0931% ( 2) 00:17:47.651 16.403 - 16.498: 99.1300% ( 5) 00:17:47.651 16.498 - 16.593: 99.1521% ( 3) 00:17:47.651 16.593 - 16.687: 99.1669% ( 2) 00:17:47.651 16.687 - 16.782: 99.1963% ( 4) 00:17:47.651 16.782 - 16.877: 99.2111% ( 2) 00:17:47.651 16.877 - 16.972: 99.2332% ( 3) 00:17:47.651 16.972 - 17.067: 99.2701% ( 5) 00:17:47.651 17.161 - 17.256: 99.2774% ( 1) 00:17:47.651 17.256 - 17.351: 99.2922% ( 2) 00:17:47.651 17.351 - 17.446: 99.3069% ( 2) 00:17:47.651 17.446 - 17.541: 99.3217% ( 2) 00:17:47.651 17.541 - 17.636: 99.3438% ( 3) 00:17:47.651 [2024-07-24 09:03:25.360839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:47.651 17.636 - 17.730: 99.3512% ( 1) 00:17:47.651 17.825 - 17.920: 99.3585% ( 1) 00:17:47.651 17.920 - 18.015: 99.3659% ( 1) 00:17:47.651 18.110 - 18.204: 99.3807% ( 2) 00:17:47.651 18.204 - 18.299: 99.4102% ( 4) 00:17:47.651 18.679 - 18.773: 99.4249% ( 2) 00:17:47.651 3592.344 - 3616.616: 99.4323% ( 1) 00:17:47.651 3859.342 - 3883.615: 99.4397% ( 1) 00:17:47.651 3980.705 - 4004.978: 99.8009% ( 49) 00:17:47.651 4004.978 - 4029.250: 100.0000% ( 27) 00:17:47.651 00:17:47.651 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:47.651 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:47.651 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:47.651 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:47.651 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:47.651 [ 00:17:47.651 { 00:17:47.651 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:47.651 "subtype": "Discovery", 00:17:47.651 "listen_addresses": [], 00:17:47.651 "allow_any_host": true, 00:17:47.651 "hosts": [] 00:17:47.651 }, 00:17:47.651 { 00:17:47.651 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:47.651 "subtype": "NVMe", 00:17:47.651 "listen_addresses": [ 00:17:47.651 { 00:17:47.651 "trtype": "VFIOUSER", 00:17:47.651 "adrfam": "IPv4", 00:17:47.651 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:47.651 "trsvcid": "0" 00:17:47.651 } 00:17:47.651 ], 00:17:47.651 "allow_any_host": true, 00:17:47.651 "hosts": [], 00:17:47.651 "serial_number": "SPDK1",
"model_number": "SPDK bdev Controller", 00:17:47.651 "max_namespaces": 32, 00:17:47.651 "min_cntlid": 1, 00:17:47.651 "max_cntlid": 65519, 00:17:47.651 "namespaces": [ 00:17:47.651 { 00:17:47.651 "nsid": 1, 00:17:47.651 "bdev_name": "Malloc1", 00:17:47.651 "name": "Malloc1", 00:17:47.651 "nguid": "29B1420C7D88490586F17B1FD1502216", 00:17:47.651 "uuid": "29b1420c-7d88-4905-86f1-7b1fd1502216" 00:17:47.652 }, 00:17:47.652 { 00:17:47.652 "nsid": 2, 00:17:47.652 "bdev_name": "Malloc3", 00:17:47.652 "name": "Malloc3", 00:17:47.652 "nguid": "09C159177DB1463DB22A0B87221759DF", 00:17:47.652 "uuid": "09c15917-7db1-463d-b22a-0b87221759df" 00:17:47.652 } 00:17:47.652 ] 00:17:47.652 }, 00:17:47.652 { 00:17:47.652 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:47.652 "subtype": "NVMe", 00:17:47.652 "listen_addresses": [ 00:17:47.652 { 00:17:47.652 "trtype": "VFIOUSER", 00:17:47.652 "adrfam": "IPv4", 00:17:47.652 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:47.652 "trsvcid": "0" 00:17:47.652 } 00:17:47.652 ], 00:17:47.652 "allow_any_host": true, 00:17:47.652 "hosts": [], 00:17:47.652 "serial_number": "SPDK2", 00:17:47.652 "model_number": "SPDK bdev Controller", 00:17:47.652 "max_namespaces": 32, 00:17:47.652 "min_cntlid": 1, 00:17:47.652 "max_cntlid": 65519, 00:17:47.652 "namespaces": [ 00:17:47.652 { 00:17:47.652 "nsid": 1, 00:17:47.652 "bdev_name": "Malloc2", 00:17:47.652 "name": "Malloc2", 00:17:47.652 "nguid": "42E956C1138F4AADB35CED8806229586", 00:17:47.652 "uuid": "42e956c1-138f-4aad-b35c-ed8806229586" 00:17:47.652 } 00:17:47.652 ] 00:17:47.652 } 00:17:47.652 ] 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3769260 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:47.652 09:03:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:47.652 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.910 [2024-07-24 09:03:25.860612] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:47.910 Malloc4 00:17:47.910 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:48.169 [2024-07-24 09:03:26.246511] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:48.169 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:48.427 Asynchronous Event Request test 00:17:48.427 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.427 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:48.427 Registering asynchronous event callbacks... 00:17:48.427 Starting namespace attribute notice tests for all controllers... 00:17:48.427 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:48.427 aer_cb - Changed Namespace 00:17:48.427 Cleaning up... 00:17:48.427 [ 00:17:48.427 { 00:17:48.427 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:48.427 "subtype": "Discovery", 00:17:48.427 "listen_addresses": [], 00:17:48.427 "allow_any_host": true, 00:17:48.427 "hosts": [] 00:17:48.427 }, 00:17:48.427 { 00:17:48.427 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:48.427 "subtype": "NVMe", 00:17:48.427 "listen_addresses": [ 00:17:48.427 { 00:17:48.427 "trtype": "VFIOUSER", 00:17:48.427 "adrfam": "IPv4", 00:17:48.427 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:48.427 "trsvcid": "0" 00:17:48.427 } 00:17:48.427 ], 00:17:48.427 "allow_any_host": true, 00:17:48.427 "hosts": [], 00:17:48.427 "serial_number": "SPDK1", 00:17:48.427 "model_number": "SPDK bdev Controller", 00:17:48.427 "max_namespaces": 32, 00:17:48.427 "min_cntlid": 1, 00:17:48.427 "max_cntlid": 65519, 00:17:48.427 "namespaces": [ 00:17:48.427 { 00:17:48.427 "nsid": 1, 00:17:48.427 "bdev_name": "Malloc1", 00:17:48.427 "name": "Malloc1", 00:17:48.427 "nguid": "29B1420C7D88490586F17B1FD1502216", 00:17:48.427 "uuid": "29b1420c-7d88-4905-86f1-7b1fd1502216" 00:17:48.427 }, 00:17:48.427 { 00:17:48.427 "nsid": 2, 00:17:48.427 "bdev_name": "Malloc3", 00:17:48.427 "name": "Malloc3", 00:17:48.427 "nguid": "09C159177DB1463DB22A0B87221759DF", 00:17:48.427 "uuid": "09c15917-7db1-463d-b22a-0b87221759df" 00:17:48.427 } 00:17:48.427 ] 00:17:48.427 }, 00:17:48.427 { 00:17:48.427 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:48.427 "subtype": "NVMe", 00:17:48.427 "listen_addresses": [ 00:17:48.427 { 00:17:48.427 "trtype": "VFIOUSER", 00:17:48.427 "adrfam": "IPv4", 00:17:48.427 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:48.427 "trsvcid": "0" 00:17:48.427 } 00:17:48.427 ], 00:17:48.427 "allow_any_host": true, 00:17:48.427 "hosts": [], 00:17:48.427 
"serial_number": "SPDK2", 00:17:48.427 "model_number": "SPDK bdev Controller", 00:17:48.427 "max_namespaces": 32, 00:17:48.427 "min_cntlid": 1, 00:17:48.427 "max_cntlid": 65519, 00:17:48.427 "namespaces": [ 00:17:48.427 { 00:17:48.427 "nsid": 1, 00:17:48.427 "bdev_name": "Malloc2", 00:17:48.427 "name": "Malloc2", 00:17:48.427 "nguid": "42E956C1138F4AADB35CED8806229586", 00:17:48.427 "uuid": "42e956c1-138f-4aad-b35c-ed8806229586" 00:17:48.427 }, 00:17:48.427 { 00:17:48.427 "nsid": 2, 00:17:48.427 "bdev_name": "Malloc4", 00:17:48.427 "name": "Malloc4", 00:17:48.427 "nguid": "612E8F784AF144C78805B6DFBB4339B1", 00:17:48.427 "uuid": "612e8f78-4af1-44c7-8805-b6dfbb4339b1" 00:17:48.427 } 00:17:48.427 ] 00:17:48.427 } 00:17:48.427 ] 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3769260 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3763121 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3763121 ']' 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3763121 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3763121 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3763121' 00:17:48.427 killing process with pid 3763121 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3763121 00:17:48.427 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3763121 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3769404 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3769404' 00:17:48.996 Process pid: 3769404 00:17:48.996 09:03:26 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3769404 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3769404 ']' 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:48.996 09:03:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:48.996 [2024-07-24 09:03:26.928083] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:48.996 [2024-07-24 09:03:26.929112] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:17:48.996 [2024-07-24 09:03:26.929189] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.996 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.996 [2024-07-24 09:03:26.961457] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:48.996 [2024-07-24 09:03:26.995032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:48.996 [2024-07-24 09:03:27.089796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.996 [2024-07-24 09:03:27.089856] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.996 [2024-07-24 09:03:27.089872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.996 [2024-07-24 09:03:27.089885] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.996 [2024-07-24 09:03:27.089896] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:48.996 [2024-07-24 09:03:27.089977] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.996 [2024-07-24 09:03:27.090056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:48.996 [2024-07-24 09:03:27.090156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.996 [2024-07-24 09:03:27.090159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.255 [2024-07-24 09:03:27.197148] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:49.255 [2024-07-24 09:03:27.197380] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:17:49.255 [2024-07-24 09:03:27.197635] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:49.255 [2024-07-24 09:03:27.198337] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:49.255 [2024-07-24 09:03:27.198571] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:49.255 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:49.255 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:17:49.255 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:50.189 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:50.448 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:50.448 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:50.448 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:50.448 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:50.448 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:50.706 Malloc1 00:17:50.706 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:50.964 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:51.532 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:51.532 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:51.532 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:51.532 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:51.792 Malloc2 00:17:51.792 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:52.050 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:52.618 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3769404 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3769404 ']' 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3769404 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3769404 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3769404' 00:17:52.879 killing process with pid 3769404 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3769404 00:17:52.879 09:03:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3769404 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:53.138 00:17:53.138 real 0m52.775s 00:17:53.138 user 3m28.248s 00:17:53.138 sys 0m4.436s 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:53.138 ************************************ 00:17:53.138 END TEST nvmf_vfio_user 00:17:53.138 ************************************ 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.138 ************************************ 00:17:53.138 START TEST nvmf_vfio_user_nvme_compliance 00:17:53.138 ************************************ 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:53.138 * Looking for test storage... 
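For reference, the two-device provisioning that the interrupt-mode pass above just walked through (its seq 1 2 loop), condensed into a sketch; the RPC names and arguments are verbatim from the trace, only the long script path is shortened. The compliance test starting here builds a similar single-device layout via rpc_cmd.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    for i in 1 2; do
        # One vfio-user socket directory per emulated controller.
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        # 64 MB malloc bdev with 512-byte blocks backing the namespace.
        ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        ./scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done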
00:17:53.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:53.138 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3770005 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3770005' 00:17:53.139 Process pid: 3770005 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3770005 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3770005 ']' 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:53.139 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:53.139 [2024-07-24 09:03:31.220852] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:17:53.139 [2024-07-24 09:03:31.220938] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:53.139 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.139 [2024-07-24 09:03:31.252998] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. 
Enabled only for validation. 00:17:53.398 [2024-07-24 09:03:31.279057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:53.398 [2024-07-24 09:03:31.364232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:53.398 [2024-07-24 09:03:31.364282] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:53.398 [2024-07-24 09:03:31.364310] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:53.398 [2024-07-24 09:03:31.364321] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:53.398 [2024-07-24 09:03:31.364338] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:53.398 [2024-07-24 09:03:31.364482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:53.398 [2024-07-24 09:03:31.364548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.398 [2024-07-24 09:03:31.364551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.398 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:53.398 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:17:53.398 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.779 malloc0 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.779 09:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.779 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:54.779 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.779 00:17:54.779 00:17:54.779 CUnit - A unit testing framework for C - Version 2.1-3 00:17:54.779 http://cunit.sourceforge.net/ 00:17:54.779 00:17:54.779 00:17:54.779 Suite: nvme_compliance 00:17:54.779 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 09:03:32.710599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.779 [2024-07-24 09:03:32.712025] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:54.779 [2024-07-24 09:03:32.712048] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:54.779 [2024-07-24 09:03:32.712074] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:54.779 [2024-07-24 09:03:32.713616] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.779 passed 00:17:54.779 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 09:03:32.798174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:54.779 [2024-07-24 09:03:32.801194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:54.779 passed 00:17:54.779 Test: admin_identify_ns ...[2024-07-24 09:03:32.887594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.038 [2024-07-24 09:03:32.948121] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:55.038 [2024-07-24 09:03:32.956120] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:55.038 [2024-07-24 09:03:32.977234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.038 passed 00:17:55.038 Test: admin_get_features_mandatory_features ...[2024-07-24 09:03:33.061053] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling 
controller 00:17:55.038 [2024-07-24 09:03:33.064074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.038 passed 00:17:55.038 Test: admin_get_features_optional_features ...[2024-07-24 09:03:33.147676] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.038 [2024-07-24 09:03:33.150698] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.298 passed 00:17:55.298 Test: admin_set_features_number_of_queues ...[2024-07-24 09:03:33.232868] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.298 [2024-07-24 09:03:33.337221] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.298 passed 00:17:55.558 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 09:03:33.423348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.558 [2024-07-24 09:03:33.426373] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.558 passed 00:17:55.558 Test: admin_get_log_page_with_lpo ...[2024-07-24 09:03:33.508716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.558 [2024-07-24 09:03:33.577115] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:55.558 [2024-07-24 09:03:33.590194] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.558 passed 00:17:55.558 Test: fabric_property_get ...[2024-07-24 09:03:33.672684] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.558 [2024-07-24 09:03:33.673974] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:55.818 [2024-07-24 09:03:33.675710] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.818 passed 00:17:55.818 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 09:03:33.758229] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.818 [2024-07-24 09:03:33.759526] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:55.818 [2024-07-24 09:03:33.761253] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:55.818 passed 00:17:55.818 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 09:03:33.844361] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:55.818 [2024-07-24 09:03:33.928127] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.077 [2024-07-24 09:03:33.944126] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.077 [2024-07-24 09:03:33.949214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.077 passed 00:17:56.077 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 09:03:34.033262] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.077 [2024-07-24 09:03:34.034589] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:56.077 [2024-07-24 09:03:34.036281] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.077 passed 00:17:56.077 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 09:03:34.120587] vfio_user.c:2836:enable_ctrlr: *NOTICE*: 
/var/run/vfio-user: enabling controller 00:17:56.336 [2024-07-24 09:03:34.196114] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:56.336 [2024-07-24 09:03:34.220115] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:56.336 [2024-07-24 09:03:34.225204] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.336 passed 00:17:56.336 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 09:03:34.308832] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.336 [2024-07-24 09:03:34.310155] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:56.336 [2024-07-24 09:03:34.310196] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:56.336 [2024-07-24 09:03:34.311857] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.336 passed 00:17:56.336 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 09:03:34.394979] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.593 [2024-07-24 09:03:34.488114] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:56.593 [2024-07-24 09:03:34.496113] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:56.594 [2024-07-24 09:03:34.504123] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:56.594 [2024-07-24 09:03:34.512114] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:56.594 [2024-07-24 09:03:34.541236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.594 passed 00:17:56.594 Test: admin_create_io_sq_verify_pc ...[2024-07-24 09:03:34.620814] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:56.594 [2024-07-24 09:03:34.637127] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:56.594 [2024-07-24 09:03:34.655181] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:56.594 passed 00:17:56.852 Test: admin_create_io_qp_max_qps ...[2024-07-24 09:03:34.739730] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:57.817 [2024-07-24 09:03:35.848119] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:17:58.385 [2024-07-24 09:03:36.235633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:58.385 passed 00:17:58.385 Test: admin_create_io_sq_shared_cq ...[2024-07-24 09:03:36.316879] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:58.385 [2024-07-24 09:03:36.448128] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:58.385 [2024-07-24 09:03:36.485214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:58.646 passed 00:17:58.646 00:17:58.646 Run Summary: Type Total Ran Passed Failed Inactive 00:17:58.646 suites 1 1 n/a 0 0 00:17:58.646 tests 18 18 18 0 0 00:17:58.646 asserts 360 360 360 0 n/a 00:17:58.646 00:17:58.646 Elapsed time = 1.563 seconds 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 
3770005 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3770005 ']' 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3770005 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3770005 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3770005' 00:17:58.646 killing process with pid 3770005 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3770005 00:17:58.646 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3770005 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:58.905 00:17:58.905 real 0m5.712s 00:17:58.905 user 0m16.078s 00:17:58.905 sys 0m0.564s 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:58.905 ************************************ 00:17:58.905 END TEST nvmf_vfio_user_nvme_compliance 00:17:58.905 ************************************ 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:58.905 ************************************ 00:17:58.905 START TEST nvmf_vfio_user_fuzz 00:17:58.905 ************************************ 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:58.905 * Looking for test storage... 
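Before the fuzz stage starts: the 18-test CUnit suite that just finished is driven by a single binary pointed at the vfio-user endpoint. A sketch of the invocation, with the flags and the transport ID string copied verbatim from the trace earlier; per the Run Summary above, all 18 tests and 360 asserts passed in roughly 1.6 seconds.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'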
00:17:58.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.905 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
[paths/export.sh@2-@6 PATH export trace omitted here: the same PATH output as shown above for the compliance test] 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33

-- # '[' -n '' ']' 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3770721 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3770721' 00:17:58.906 Process pid: 3770721 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3770721 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3770721 ']' 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
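An aside for anyone reproducing the fuzz stage: the target above is launched on a single reactor (core mask 0x1), while the fuzzer further below is pinned to mask 0x2, presumably so the busy-polling fuzz client and the target reactor never contend for the same CPU. The launch, flags verbatim from the trace:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # One reactor on core 0; this is the 'reactor_0' process name seen at kill time.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!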
00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.906 09:03:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:59.166 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.166 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:17:59.166 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 malloc0 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.544 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.545 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
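The rpc_cmd calls above (the harness's wrapper that drives rpc.py against the default /var/tmp/spdk.sock) provision exactly one controller for fuzzing. Spelled out as plain rpc.py calls, with names and arguments verbatim from the trace and the script path shortened:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0

The nvme_fuzz run that follows fuzzes for thirty seconds (-t 30) with a fixed seed (-S 123456), so the randomized admin and I/O command streams behind the completion counts below are reproducible across runs.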
00:18:00.545 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:32.632 Fuzzing completed. Shutting down the fuzz application 00:18:32.632 00:18:32.632 Dumping successful admin opcodes: 00:18:32.632 8, 9, 10, 24, 00:18:32.632 Dumping successful io opcodes: 00:18:32.632 0, 00:18:32.632 NS: 0x200003a1ef00 I/O qp, Total commands completed: 685065, total successful commands: 2667, random_seed: 108534400 00:18:32.632 NS: 0x200003a1ef00 admin qp, Total commands completed: 168712, total successful commands: 1375, random_seed: 1419448448 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3770721 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3770721 ']' 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3770721 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3770721 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3770721' 00:18:32.633 killing process with pid 3770721 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3770721 00:18:32.633 09:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3770721 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:32.633 00:18:32.633 real 0m32.247s 00:18:32.633 user 0m34.461s 00:18:32.633 sys 0m26.602s 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 
************************************ 00:18:32.633 END TEST nvmf_vfio_user_fuzz 00:18:32.633 ************************************ 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.633 ************************************ 00:18:32.633 START TEST nvmf_auth_target 00:18:32.633 ************************************ 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:32.633 * Looking for test storage... 00:18:32.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.633 09:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # [paths/export.sh@2-@6 PATH export trace omitted here: the same PATH output as shown above] 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.633 09:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.200 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:18:33.200 Found 0000:09:00.0 (0x8086 - 0x159b) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:18:33.200 Found 0000:09:00.1 (0x8086 - 0x159b) 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.200 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:18:33.201 Found net devices under 0000:09:00.0: cvl_0_0 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.201 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:18:33.201 Found net devices under 0000:09:00.1: cvl_0_1 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.201 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:18:33.201 00:18:33.201 --- 10.0.0.2 ping statistics --- 00:18:33.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.201 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:18:33.201 00:18:33.201 --- 10.0.0.1 ping statistics --- 00:18:33.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.201 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3776065 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3776065 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3776065 ']' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.201 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.201 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3776172 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=517e974b0c30ee61c4c4484a8511ffe5df90f03b728bf06e 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fS2 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 517e974b0c30ee61c4c4484a8511ffe5df90f03b728bf06e 0 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 517e974b0c30ee61c4c4484a8511ffe5df90f03b728bf06e 0 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=517e974b0c30ee61c4c4484a8511ffe5df90f03b728bf06e 00:18:33.459 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:33.459 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fS2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fS2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.fS2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=245f4ae855ad49c87bc414d23bfb88371207566672ebe62836529890e63dd577 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lfq 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 245f4ae855ad49c87bc414d23bfb88371207566672ebe62836529890e63dd577 3 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 245f4ae855ad49c87bc414d23bfb88371207566672ebe62836529890e63dd577 3 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=245f4ae855ad49c87bc414d23bfb88371207566672ebe62836529890e63dd577 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lfq 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lfq 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.lfq 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.719 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44b4f6520a7da1aad16d60d67b4eafbe 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.vSv 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44b4f6520a7da1aad16d60d67b4eafbe 1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44b4f6520a7da1aad16d60d67b4eafbe 1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44b4f6520a7da1aad16d60d67b4eafbe 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.vSv 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.vSv 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.vSv 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf61019faf9aa966f9500c38b422e432a957e1cbeb46e6d2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Ilf 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf61019faf9aa966f9500c38b422e432a957e1cbeb46e6d2 2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
cf61019faf9aa966f9500c38b422e432a957e1cbeb46e6d2 2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cf61019faf9aa966f9500c38b422e432a957e1cbeb46e6d2 00:18:33.719 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Ilf 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Ilf 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.Ilf 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ddc6287465a3bbc868d3360c367a14bd304f6c27d5b415eb 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.QQg 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ddc6287465a3bbc868d3360c367a14bd304f6c27d5b415eb 2 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ddc6287465a3bbc868d3360c367a14bd304f6c27d5b415eb 2 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ddc6287465a3bbc868d3360c367a14bd304f6c27d5b415eb 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.QQg 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.QQg 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.QQg 00:18:33.720 09:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0d2e2adec6589d2d6b8dabd545834ba0 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.MSI 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0d2e2adec6589d2d6b8dabd545834ba0 1 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0d2e2adec6589d2d6b8dabd545834ba0 1 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0d2e2adec6589d2d6b8dabd545834ba0 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:33.720 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.MSI 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.MSI 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.MSI 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=e9e41d99ba256823535a33d075937909f14a91190c94ee9843ab3fc8d5d9cc71 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:33.978 
09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.sGO 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key e9e41d99ba256823535a33d075937909f14a91190c94ee9843ab3fc8d5d9cc71 3 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 e9e41d99ba256823535a33d075937909f14a91190c94ee9843ab3fc8d5d9cc71 3 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=e9e41d99ba256823535a33d075937909f14a91190c94ee9843ab3fc8d5d9cc71 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.sGO 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.sGO 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.sGO 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3776065 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3776065 ']' 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.978 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.979 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.979 09:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3776172 /var/tmp/host.sock 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3776172 ']' 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:18:34.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.237 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.fS2 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.fS2 00:18:34.495 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.fS2 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.lfq ]] 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lfq 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lfq 00:18:34.753 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lfq 00:18:35.010 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:35.011 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.vSv 00:18:35.011 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.011 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.011 09:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.011 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.vSv 00:18:35.011 09:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.vSv 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.Ilf ]] 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ilf 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ilf 00:18:35.268 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Ilf 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.QQg 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.QQg 00:18:35.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.QQg 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.MSI ]] 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MSI 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MSI 00:18:35.784 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.MSI 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.sGO 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.sGO 00:18:36.043 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.sGO 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.301 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:36.559 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.560 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.818 00:18:36.818 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.818 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.818 09:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.076 { 00:18:37.076 "cntlid": 1, 00:18:37.076 "qid": 0, 00:18:37.076 "state": "enabled", 00:18:37.076 "thread": "nvmf_tgt_poll_group_000", 00:18:37.076 "listen_address": { 00:18:37.076 "trtype": "TCP", 00:18:37.076 "adrfam": "IPv4", 00:18:37.076 "traddr": "10.0.0.2", 00:18:37.076 "trsvcid": "4420" 00:18:37.076 }, 00:18:37.076 "peer_address": { 00:18:37.076 "trtype": "TCP", 00:18:37.076 "adrfam": "IPv4", 00:18:37.076 "traddr": "10.0.0.1", 00:18:37.076 "trsvcid": "47248" 00:18:37.076 }, 00:18:37.076 "auth": { 00:18:37.076 "state": "completed", 00:18:37.076 "digest": "sha256", 00:18:37.076 "dhgroup": "null" 00:18:37.076 } 00:18:37.076 } 00:18:37.076 ]' 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.076 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.335 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret 
DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.269 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:38.528 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:18:39.096 00:18:39.096 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.096 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.096 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.096 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.354 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.355 { 00:18:39.355 "cntlid": 3, 00:18:39.355 "qid": 0, 00:18:39.355 "state": "enabled", 00:18:39.355 "thread": "nvmf_tgt_poll_group_000", 00:18:39.355 "listen_address": { 00:18:39.355 "trtype": "TCP", 00:18:39.355 "adrfam": "IPv4", 00:18:39.355 "traddr": "10.0.0.2", 00:18:39.355 "trsvcid": "4420" 00:18:39.355 }, 00:18:39.355 "peer_address": { 00:18:39.355 "trtype": "TCP", 00:18:39.355 "adrfam": "IPv4", 00:18:39.355 "traddr": "10.0.0.1", 00:18:39.355 "trsvcid": "51070" 00:18:39.355 }, 00:18:39.355 "auth": { 00:18:39.355 "state": "completed", 00:18:39.355 "digest": "sha256", 00:18:39.355 "dhgroup": "null" 00:18:39.355 } 00:18:39.355 } 00:18:39.355 ]' 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.355 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.612 09:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.547 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.547 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.806 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:41.064 00:18:41.064 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:41.064 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.064 09:04:19 
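
Every authenticated attach above is verified the same way before teardown: the host lists its controllers to confirm nvme0 came up, then the target's qpair dump is checked with jq for the negotiated digest, DH group, and a completed auth state. A minimal sketch of that check, assuming the same RPC sockets this job uses (host RPCs on /var/tmp/host.sock, target RPCs on the default socket):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Host side: the controller attached above should be reported back as nvme0.
[[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Target side: the qpair's auth block records what was actually negotiated.
qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
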
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:41.323 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.323 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.323 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.323 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.581 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.581 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.581 { 00:18:41.581 "cntlid": 5, 00:18:41.581 "qid": 0, 00:18:41.581 "state": "enabled", 00:18:41.582 "thread": "nvmf_tgt_poll_group_000", 00:18:41.582 "listen_address": { 00:18:41.582 "trtype": "TCP", 00:18:41.582 "adrfam": "IPv4", 00:18:41.582 "traddr": "10.0.0.2", 00:18:41.582 "trsvcid": "4420" 00:18:41.582 }, 00:18:41.582 "peer_address": { 00:18:41.582 "trtype": "TCP", 00:18:41.582 "adrfam": "IPv4", 00:18:41.582 "traddr": "10.0.0.1", 00:18:41.582 "trsvcid": "51100" 00:18:41.582 }, 00:18:41.582 "auth": { 00:18:41.582 "state": "completed", 00:18:41.582 "digest": "sha256", 00:18:41.582 "dhgroup": "null" 00:18:41.582 } 00:18:41.582 } 00:18:41.582 ]' 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.582 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.839 09:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
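
Between the RPC-driven passes, the same credentials are also exercised through the kernel initiator: nvme-cli connects with the DHHC-1 secrets spelled out on the command line, and the session is torn down again before the next key id. Sketched with placeholder secrets, since the literal values in this log are throwaway test keys:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Kernel-initiator connect with bidirectional DH-HMAC-CHAP secrets, then teardown.
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 \
    -q "$hostnqn" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
    --dhchap-secret 'DHHC-1:02:<host secret>:' \
    --dhchap-ctrl-secret 'DHHC-1:01:<controller secret>:'
nvme disconnect -n "$subnqn"
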
00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:42.774 09:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.056 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.314 00:18:43.314 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.314 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.314 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.572 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.573 { 00:18:43.573 "cntlid": 7, 00:18:43.573 "qid": 0, 00:18:43.573 "state": "enabled", 00:18:43.573 "thread": "nvmf_tgt_poll_group_000", 00:18:43.573 "listen_address": { 00:18:43.573 "trtype": "TCP", 00:18:43.573 "adrfam": "IPv4", 00:18:43.573 "traddr": "10.0.0.2", 00:18:43.573 "trsvcid": "4420" 00:18:43.573 }, 00:18:43.573 "peer_address": { 00:18:43.573 "trtype": "TCP", 00:18:43.573 "adrfam": "IPv4", 00:18:43.573 "traddr": "10.0.0.1", 00:18:43.573 "trsvcid": "51130" 00:18:43.573 }, 00:18:43.573 "auth": { 00:18:43.573 "state": "completed", 00:18:43.573 "digest": "sha256", 00:18:43.573 "dhgroup": "null" 00:18:43.573 } 00:18:43.573 } 00:18:43.573 ]' 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:43.573 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.831 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.831 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.831 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.831 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.831 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.089 09:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.024 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.024 09:04:22 
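
Having finished the null-group passes, the script moves the host on to ffdhe2048. bdev_nvme_set_options is what pins down which digests and DH groups the host-side driver may offer, so constraining it to a single pair per pass makes the negotiated values in the qpair dump unambiguous. Roughly, using the script's own hostrpc helper:

# hostrpc wraps rpc.py against the host app's socket, as at target/auth.sh@31.
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

# One digest / one DH group per pass; here the matrix advances to ffdhe2048.
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
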
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.024 09:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.281 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.538 00:18:45.538 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.538 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.538 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.794 09:04:23 
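
Each pass pairs a target-side grant with a host-side attach: nvmf_subsystem_add_host tells the target which DH-HMAC-CHAP key (and, optionally, controller key) this host NQN must present, and bdev_nvme_attach_controller then performs the authenticated connect. A sketch of the key0/ckey0 pass, assuming rpc_cmd and hostrpc are the helpers defined earlier in the script and key0/ckey0 name previously registered keys:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

# Target: require key0 from this host, offering ckey0 for controller authentication.
rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host: authenticated attach over TCP with the matching key pair.
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
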
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.794 { 00:18:45.794 "cntlid": 9, 00:18:45.794 "qid": 0, 00:18:45.794 "state": "enabled", 00:18:45.794 "thread": "nvmf_tgt_poll_group_000", 00:18:45.794 "listen_address": { 00:18:45.794 "trtype": "TCP", 00:18:45.794 "adrfam": "IPv4", 00:18:45.794 "traddr": "10.0.0.2", 00:18:45.794 "trsvcid": "4420" 00:18:45.794 }, 00:18:45.794 "peer_address": { 00:18:45.794 "trtype": "TCP", 00:18:45.794 "adrfam": "IPv4", 00:18:45.794 "traddr": "10.0.0.1", 00:18:45.794 "trsvcid": "51148" 00:18:45.794 }, 00:18:45.794 "auth": { 00:18:45.794 "state": "completed", 00:18:45.794 "digest": "sha256", 00:18:45.794 "dhgroup": "ffdhe2048" 00:18:45.794 } 00:18:45.794 } 00:18:45.794 ]' 00:18:45.794 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.051 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.308 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.240 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.498 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.063 00:18:48.063 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:48.063 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.063 09:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:48.063 { 00:18:48.063 "cntlid": 11, 00:18:48.063 "qid": 0, 00:18:48.063 "state": "enabled", 00:18:48.063 "thread": "nvmf_tgt_poll_group_000", 00:18:48.063 "listen_address": { 
00:18:48.063 "trtype": "TCP", 00:18:48.063 "adrfam": "IPv4", 00:18:48.063 "traddr": "10.0.0.2", 00:18:48.063 "trsvcid": "4420" 00:18:48.063 }, 00:18:48.063 "peer_address": { 00:18:48.063 "trtype": "TCP", 00:18:48.063 "adrfam": "IPv4", 00:18:48.063 "traddr": "10.0.0.1", 00:18:48.063 "trsvcid": "33864" 00:18:48.063 }, 00:18:48.063 "auth": { 00:18:48.063 "state": "completed", 00:18:48.063 "digest": "sha256", 00:18:48.063 "dhgroup": "ffdhe2048" 00:18:48.063 } 00:18:48.063 } 00:18:48.063 ]' 00:18:48.063 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.321 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.578 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.509 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.766 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.767 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.767 09:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.024 00:18:50.281 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.281 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.281 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.539 { 00:18:50.539 "cntlid": 13, 00:18:50.539 "qid": 0, 00:18:50.539 "state": "enabled", 00:18:50.539 "thread": "nvmf_tgt_poll_group_000", 00:18:50.539 "listen_address": { 00:18:50.539 "trtype": "TCP", 00:18:50.539 "adrfam": "IPv4", 00:18:50.539 "traddr": "10.0.0.2", 00:18:50.539 "trsvcid": "4420" 00:18:50.539 }, 00:18:50.539 "peer_address": { 00:18:50.539 "trtype": "TCP", 00:18:50.539 "adrfam": "IPv4", 00:18:50.539 "traddr": "10.0.0.1", 00:18:50.539 "trsvcid": "33882" 00:18:50.539 }, 00:18:50.539 "auth": { 00:18:50.539 
"state": "completed", 00:18:50.539 "digest": "sha256", 00:18:50.539 "dhgroup": "ffdhe2048" 00:18:50.539 } 00:18:50.539 } 00:18:50.539 ]' 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.539 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.797 09:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.730 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:51.989 09:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:52.248 00:18:52.248 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.248 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.248 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.506 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.507 { 00:18:52.507 "cntlid": 15, 00:18:52.507 "qid": 0, 00:18:52.507 "state": "enabled", 00:18:52.507 "thread": "nvmf_tgt_poll_group_000", 00:18:52.507 "listen_address": { 00:18:52.507 "trtype": "TCP", 00:18:52.507 "adrfam": "IPv4", 00:18:52.507 "traddr": "10.0.0.2", 00:18:52.507 "trsvcid": "4420" 00:18:52.507 }, 00:18:52.507 "peer_address": { 00:18:52.507 "trtype": "TCP", 00:18:52.507 "adrfam": "IPv4", 00:18:52.507 "traddr": "10.0.0.1", 00:18:52.507 "trsvcid": "33906" 00:18:52.507 }, 00:18:52.507 "auth": { 00:18:52.507 "state": "completed", 00:18:52.507 "digest": "sha256", 00:18:52.507 "dhgroup": "ffdhe2048" 00:18:52.507 } 00:18:52.507 } 00:18:52.507 ]' 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.507 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.507 09:04:30 
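
Note that the key3 passes, like the one above, drop --dhchap-ctrlr-key entirely: the ckey expansion visible at target/auth.sh@37 only emits the flag pair when a controller key exists for that key id, so key3 exercises unidirectional authentication (the host proves itself to the target, but not the reverse). An illustrative expansion of the same idiom:

keyid=3
ckeys=(ckey0 ckey1 ckey2 '')          # illustrative: no controller key for id 3

# ${var:+words} expands to nothing when the entry is unset or empty, so the
# flag pair simply disappears from the command line for key3.
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "extra args: ${ckey[*]:-<none>}"
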
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.765 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.765 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.765 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.765 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.765 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.023 09:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:53.958 09:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.216 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:54.474 00:18:54.474 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.474 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.474 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.731 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.732 { 00:18:54.732 "cntlid": 17, 00:18:54.732 "qid": 0, 00:18:54.732 "state": "enabled", 00:18:54.732 "thread": "nvmf_tgt_poll_group_000", 00:18:54.732 "listen_address": { 00:18:54.732 "trtype": "TCP", 00:18:54.732 "adrfam": "IPv4", 00:18:54.732 "traddr": "10.0.0.2", 00:18:54.732 "trsvcid": "4420" 00:18:54.732 }, 00:18:54.732 "peer_address": { 00:18:54.732 "trtype": "TCP", 00:18:54.732 "adrfam": "IPv4", 00:18:54.732 "traddr": "10.0.0.1", 00:18:54.732 "trsvcid": "33930" 00:18:54.732 }, 00:18:54.732 "auth": { 00:18:54.732 "state": "completed", 00:18:54.732 "digest": "sha256", 00:18:54.732 "dhgroup": "ffdhe3072" 00:18:54.732 } 00:18:54.732 } 00:18:54.732 ]' 00:18:54.732 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:54.990 09:04:32 
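
After each verified connection the state is unwound so the next key id starts clean: detach the RPC-attached controller, cycle the kernel-initiator session, and revoke the host entry on the target. The teardown sequence, in outline, reusing the same helpers and NQNs as above:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a

hostrpc bdev_nvme_detach_controller nvme0    # drop the host app's controller
nvme disconnect -n "$subnqn"                 # drop the kernel-initiator session
rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"   # revoke the grant
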
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.990 09:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.248 09:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.185 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.443 09:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.443 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:56.701 00:18:56.701 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.701 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.701 09:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.959 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.959 { 00:18:56.959 "cntlid": 19, 00:18:56.959 "qid": 0, 00:18:56.959 "state": "enabled", 00:18:56.959 "thread": "nvmf_tgt_poll_group_000", 00:18:56.959 "listen_address": { 00:18:56.959 "trtype": "TCP", 00:18:56.959 "adrfam": "IPv4", 00:18:56.959 "traddr": "10.0.0.2", 00:18:56.959 "trsvcid": "4420" 00:18:56.959 }, 00:18:56.959 "peer_address": { 00:18:56.959 "trtype": "TCP", 00:18:56.959 "adrfam": "IPv4", 00:18:56.959 "traddr": "10.0.0.1", 00:18:56.959 "trsvcid": "33972" 00:18:56.959 }, 00:18:56.959 "auth": { 00:18:56.959 "state": "completed", 00:18:56.959 "digest": "sha256", 00:18:56.959 "dhgroup": "ffdhe3072" 00:18:56.959 } 00:18:56.959 } 00:18:56.960 ]' 00:18:56.960 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.960 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.960 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.217 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:57.217 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.217 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.217 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.217 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.476 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.409 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.667 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.925 00:18:58.925 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.925 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.925 09:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.182 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.182 { 00:18:59.182 "cntlid": 21, 00:18:59.182 "qid": 0, 00:18:59.183 "state": "enabled", 00:18:59.183 "thread": "nvmf_tgt_poll_group_000", 00:18:59.183 "listen_address": { 00:18:59.183 "trtype": "TCP", 00:18:59.183 "adrfam": "IPv4", 00:18:59.183 "traddr": "10.0.0.2", 00:18:59.183 "trsvcid": "4420" 00:18:59.183 }, 00:18:59.183 "peer_address": { 00:18:59.183 "trtype": "TCP", 00:18:59.183 "adrfam": "IPv4", 00:18:59.183 "traddr": "10.0.0.1", 00:18:59.183 "trsvcid": "42382" 00:18:59.183 }, 00:18:59.183 "auth": { 00:18:59.183 "state": "completed", 00:18:59.183 "digest": "sha256", 00:18:59.183 "dhgroup": "ffdhe3072" 00:18:59.183 } 00:18:59.183 } 00:18:59.183 ]' 00:18:59.183 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.183 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.183 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.183 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.183 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.440 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.440 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.440 09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.698 
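
The DHHC-1 strings threaded through all of these commands follow the NVMe DH-HMAC-CHAP secret representation, DHHC-1:<hmac>:<base64>:, where the hmac field names the key transformation (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and, as far as TP 8006 documents the format, the base64 blob carries the raw secret plus a 4-byte CRC-32 trailer. A quick shape check against one of the throwaway keys from this log:

key='DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc:'
IFS=: read -r tag hmac blob _ <<< "$key"
echo "$tag hmac=$hmac"                  # DHHC-1 hmac=01 (SHA-256 transform)
echo -n "$blob" | base64 -d | wc -c     # 36 bytes: a 32-byte secret + CRC-32 trailer
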
09:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.632 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.890 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:01.147 00:19:01.147 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.147 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.147 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.406 { 00:19:01.406 "cntlid": 23, 00:19:01.406 "qid": 0, 00:19:01.406 "state": "enabled", 00:19:01.406 "thread": "nvmf_tgt_poll_group_000", 00:19:01.406 "listen_address": { 00:19:01.406 "trtype": "TCP", 00:19:01.406 "adrfam": "IPv4", 00:19:01.406 "traddr": "10.0.0.2", 00:19:01.406 "trsvcid": "4420" 00:19:01.406 }, 00:19:01.406 "peer_address": { 00:19:01.406 "trtype": "TCP", 00:19:01.406 "adrfam": "IPv4", 00:19:01.406 "traddr": "10.0.0.1", 00:19:01.406 "trsvcid": "42406" 00:19:01.406 }, 00:19:01.406 "auth": { 00:19:01.406 "state": "completed", 00:19:01.406 "digest": "sha256", 00:19:01.406 "dhgroup": "ffdhe3072" 00:19:01.406 } 00:19:01.406 } 00:19:01.406 ]' 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.406 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.697 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.697 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.697 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.698 09:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:19:02.632 09:04:40 
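Note: the pass/fail check repeated at target/auth.sh@44-48 above is plain jq over the target's qpair listing. A standalone sketch of the same assertions, assuming the target answers on the default RPC socket and the iteration configured sha256/ffdhe3072:

    # fetch active queue pairs and assert the negotiated auth parameters
    qpairs=$(./spdk/scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]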
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.632 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.891 09:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:02.891 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.457 00:19:03.457 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.457 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.457 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.716 { 00:19:03.716 "cntlid": 25, 00:19:03.716 "qid": 0, 00:19:03.716 "state": "enabled", 00:19:03.716 "thread": "nvmf_tgt_poll_group_000", 00:19:03.716 "listen_address": { 00:19:03.716 "trtype": "TCP", 00:19:03.716 "adrfam": "IPv4", 00:19:03.716 "traddr": "10.0.0.2", 00:19:03.716 "trsvcid": "4420" 00:19:03.716 }, 00:19:03.716 "peer_address": { 00:19:03.716 "trtype": "TCP", 00:19:03.716 "adrfam": "IPv4", 00:19:03.716 "traddr": "10.0.0.1", 00:19:03.716 "trsvcid": "42438" 00:19:03.716 }, 00:19:03.716 "auth": { 00:19:03.716 "state": "completed", 00:19:03.716 "digest": "sha256", 00:19:03.716 "dhgroup": "ffdhe4096" 00:19:03.716 } 00:19:03.716 } 00:19:03.716 ]' 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.716 09:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.974 09:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
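Note: each iteration also re-proves the secrets from the kernel initiator with nvme-cli, passing host and controller keys in the clear-text DHHC-1 form, exactly as the connect lines above show. The shape of that call with placeholder secrets (the real keys were generated earlier in the test, not shown here):

    # kernel-side connect/disconnect exercising DH-HMAC-CHAP end to end
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "DHHC-1:00:<host key>" \
        --dhchap-ctrl-secret "DHHC-1:03:<controller key>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0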
00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.347 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:05.604 00:19:05.604 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:05.604 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:05.604 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.861 { 00:19:05.861 "cntlid": 27, 00:19:05.861 "qid": 0, 00:19:05.861 "state": "enabled", 00:19:05.861 "thread": "nvmf_tgt_poll_group_000", 00:19:05.861 "listen_address": { 00:19:05.861 "trtype": "TCP", 00:19:05.861 "adrfam": "IPv4", 00:19:05.861 "traddr": "10.0.0.2", 00:19:05.861 "trsvcid": "4420" 00:19:05.861 }, 00:19:05.861 "peer_address": { 00:19:05.861 "trtype": "TCP", 00:19:05.861 "adrfam": "IPv4", 00:19:05.861 "traddr": "10.0.0.1", 00:19:05.861 "trsvcid": "42456" 00:19:05.861 }, 00:19:05.861 "auth": { 00:19:05.861 "state": "completed", 00:19:05.861 "digest": "sha256", 00:19:05.861 "dhgroup": "ffdhe4096" 00:19:05.861 } 00:19:05.861 } 00:19:05.861 ]' 00:19:05.861 09:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.119 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.377 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.310 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.311 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.568 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:07.826 00:19:07.826 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.826 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.826 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.084 { 00:19:08.084 "cntlid": 29, 00:19:08.084 "qid": 0, 00:19:08.084 "state": "enabled", 00:19:08.084 "thread": "nvmf_tgt_poll_group_000", 00:19:08.084 "listen_address": { 00:19:08.084 "trtype": "TCP", 00:19:08.084 "adrfam": "IPv4", 00:19:08.084 "traddr": "10.0.0.2", 00:19:08.084 "trsvcid": "4420" 00:19:08.084 }, 00:19:08.084 "peer_address": { 00:19:08.084 "trtype": "TCP", 00:19:08.084 "adrfam": "IPv4", 00:19:08.084 "traddr": "10.0.0.1", 00:19:08.084 "trsvcid": "59320" 00:19:08.084 }, 00:19:08.084 "auth": { 00:19:08.084 "state": "completed", 00:19:08.084 "digest": "sha256", 00:19:08.084 "dhgroup": "ffdhe4096" 00:19:08.084 } 00:19:08.084 } 00:19:08.084 ]' 00:19:08.084 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.342 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.600 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.533 09:04:47 
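Note: stripped of xtrace noise, one (digest, dhgroup, key) iteration is three RPCs before verification: host-side option setup, target-side host registration, host-side attach. A condensed sketch of the sequence this section keeps repeating ($rpc and $hostnqn are shorthand, not from the log):

    rpc=./spdk/scripts/rpc.py
    # 1) restrict the host to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    # 2) allow the host NQN on the subsystem with this key pair
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # 3) attach; DH-HMAC-CHAP runs during the fabric connect
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2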
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.533 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:09.791 09:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.357 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:19:10.357 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.357 { 00:19:10.357 "cntlid": 31, 00:19:10.357 "qid": 0, 00:19:10.357 "state": "enabled", 00:19:10.357 "thread": "nvmf_tgt_poll_group_000", 00:19:10.357 "listen_address": { 00:19:10.357 "trtype": "TCP", 00:19:10.357 "adrfam": "IPv4", 00:19:10.357 "traddr": "10.0.0.2", 00:19:10.357 "trsvcid": "4420" 00:19:10.357 }, 00:19:10.357 "peer_address": { 00:19:10.357 "trtype": "TCP", 00:19:10.357 "adrfam": "IPv4", 00:19:10.357 "traddr": "10.0.0.1", 00:19:10.357 "trsvcid": "59354" 00:19:10.357 }, 00:19:10.357 "auth": { 00:19:10.357 "state": "completed", 00:19:10.357 "digest": "sha256", 00:19:10.357 "dhgroup": "ffdhe4096" 00:19:10.357 } 00:19:10.357 } 00:19:10.357 ]' 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.615 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.873 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:11.806 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.064 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.629 00:19:12.629 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.629 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.629 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.887 { 00:19:12.887 "cntlid": 33, 00:19:12.887 "qid": 0, 00:19:12.887 "state": "enabled", 00:19:12.887 "thread": "nvmf_tgt_poll_group_000", 00:19:12.887 "listen_address": { 
00:19:12.887 "trtype": "TCP", 00:19:12.887 "adrfam": "IPv4", 00:19:12.887 "traddr": "10.0.0.2", 00:19:12.887 "trsvcid": "4420" 00:19:12.887 }, 00:19:12.887 "peer_address": { 00:19:12.887 "trtype": "TCP", 00:19:12.887 "adrfam": "IPv4", 00:19:12.887 "traddr": "10.0.0.1", 00:19:12.887 "trsvcid": "59370" 00:19:12.887 }, 00:19:12.887 "auth": { 00:19:12.887 "state": "completed", 00:19:12.887 "digest": "sha256", 00:19:12.887 "dhgroup": "ffdhe6144" 00:19:12.887 } 00:19:12.887 } 00:19:12.887 ]' 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.887 09:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.145 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.145 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.145 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.145 09:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.079 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:14.338 09:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.338 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.903 00:19:14.903 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.903 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.903 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.161 { 00:19:15.161 "cntlid": 35, 00:19:15.161 "qid": 0, 00:19:15.161 "state": "enabled", 00:19:15.161 "thread": "nvmf_tgt_poll_group_000", 00:19:15.161 "listen_address": { 00:19:15.161 "trtype": "TCP", 00:19:15.161 "adrfam": "IPv4", 00:19:15.161 "traddr": "10.0.0.2", 00:19:15.161 "trsvcid": "4420" 00:19:15.161 }, 00:19:15.161 "peer_address": { 00:19:15.161 "trtype": "TCP", 00:19:15.161 "adrfam": "IPv4", 00:19:15.161 "traddr": "10.0.0.1", 00:19:15.161 "trsvcid": "59390" 00:19:15.161 
}, 00:19:15.161 "auth": { 00:19:15.161 "state": "completed", 00:19:15.161 "digest": "sha256", 00:19:15.161 "dhgroup": "ffdhe6144" 00:19:15.161 } 00:19:15.161 } 00:19:15.161 ]' 00:19:15.161 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.429 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.693 09:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:19:16.624 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.624 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.624 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.624 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.624 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.625 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.625 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.625 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:16.882 09:04:54 
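Note: the long DHHC-1 strings in the connect lines are the clear-text secret representation from the NVMe DH-HMAC-CHAP spec: "DHHC-1:" plus a two-digit transform id (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) plus the base64 of the key with a trailing CRC. A fresh key can be minted with nvme-cli; flag spelling is hedged here, check gen-dhchap-key --help on your build:

    # generate a SHA-256-transformed host key bound to this host NQN
    nvme gen-dhchap-key --hmac=1 --nqn "$hostnqn"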
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:16.882 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:17.446 00:19:17.446 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.446 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.446 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.703 { 00:19:17.703 "cntlid": 37, 00:19:17.703 "qid": 0, 00:19:17.703 "state": "enabled", 00:19:17.703 "thread": "nvmf_tgt_poll_group_000", 00:19:17.703 "listen_address": { 00:19:17.703 "trtype": "TCP", 00:19:17.703 "adrfam": "IPv4", 00:19:17.703 "traddr": "10.0.0.2", 00:19:17.703 "trsvcid": "4420" 00:19:17.703 }, 00:19:17.703 "peer_address": { 00:19:17.703 "trtype": "TCP", 00:19:17.703 "adrfam": "IPv4", 00:19:17.703 "traddr": "10.0.0.1", 00:19:17.703 "trsvcid": "59414" 00:19:17.703 }, 00:19:17.703 "auth": { 00:19:17.703 "state": "completed", 00:19:17.703 "digest": "sha256", 00:19:17.703 "dhgroup": "ffdhe6144" 00:19:17.703 } 00:19:17.703 } 00:19:17.703 ]' 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.703 09:04:55 
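Note: the ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion visible above is what makes bidirectional auth optional per key index: bash's ${var:+word} yields nothing when a ckey slot is empty, so key3 iterations register and attach with a host key only, matching the key3 lines earlier in this log. Self-contained illustration with made-up values:

    ckeys=(ckey0 ckey1 ckey2 "")    # slot 3 deliberately empty
    for i in 0 3; do
        extra=( ${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"} )
        echo "key$i -> ${extra[*]:-(unidirectional, no ctrlr key)}"
    done
    # key0 -> --dhchap-ctrlr-key ckey0
    # key3 -> (unidirectional, no ctrlr key)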
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.703 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.960 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.331 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.331 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.332 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:19.896 00:19:19.896 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.896 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.896 09:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.153 { 00:19:20.153 "cntlid": 39, 00:19:20.153 "qid": 0, 00:19:20.153 "state": "enabled", 00:19:20.153 "thread": "nvmf_tgt_poll_group_000", 00:19:20.153 "listen_address": { 00:19:20.153 "trtype": "TCP", 00:19:20.153 "adrfam": "IPv4", 00:19:20.153 "traddr": "10.0.0.2", 00:19:20.153 "trsvcid": "4420" 00:19:20.153 }, 00:19:20.153 "peer_address": { 00:19:20.153 "trtype": "TCP", 00:19:20.153 "adrfam": "IPv4", 00:19:20.153 "traddr": "10.0.0.1", 00:19:20.153 "trsvcid": "51352" 00:19:20.153 }, 00:19:20.153 "auth": { 00:19:20.153 "state": "completed", 00:19:20.153 "digest": "sha256", 00:19:20.153 "dhgroup": "ffdhe6144" 00:19:20.153 } 00:19:20.153 } 00:19:20.153 ]' 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:20.153 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.410 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.410 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.411 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.411 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.796 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:21.797 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.731 00:19:22.731 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.731 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.731 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.989 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.989 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.989 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.989 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.989 { 00:19:22.989 "cntlid": 41, 00:19:22.989 "qid": 0, 00:19:22.989 "state": "enabled", 00:19:22.989 "thread": "nvmf_tgt_poll_group_000", 00:19:22.989 "listen_address": { 00:19:22.989 "trtype": "TCP", 00:19:22.989 "adrfam": "IPv4", 00:19:22.989 "traddr": "10.0.0.2", 00:19:22.989 "trsvcid": "4420" 00:19:22.989 }, 00:19:22.989 "peer_address": { 00:19:22.989 "trtype": "TCP", 00:19:22.989 "adrfam": "IPv4", 00:19:22.989 "traddr": "10.0.0.1", 00:19:22.989 "trsvcid": "51388" 00:19:22.989 }, 00:19:22.989 "auth": { 00:19:22.989 "state": "completed", 00:19:22.989 "digest": "sha256", 00:19:22.989 "dhgroup": "ffdhe8192" 00:19:22.989 } 00:19:22.989 } 00:19:22.989 ]' 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:22.989 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.247 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.247 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:23.247 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.505 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.437 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.694 09:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:25.624 00:19:25.624 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.625 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.882 { 00:19:25.882 "cntlid": 43, 00:19:25.882 "qid": 0, 00:19:25.882 "state": "enabled", 00:19:25.882 "thread": "nvmf_tgt_poll_group_000", 00:19:25.882 "listen_address": { 00:19:25.882 "trtype": "TCP", 00:19:25.882 "adrfam": "IPv4", 00:19:25.882 "traddr": "10.0.0.2", 00:19:25.882 "trsvcid": "4420" 00:19:25.882 }, 00:19:25.882 "peer_address": { 00:19:25.882 "trtype": "TCP", 00:19:25.882 "adrfam": "IPv4", 00:19:25.882 "traddr": "10.0.0.1", 00:19:25.882 "trsvcid": "51426" 00:19:25.882 }, 00:19:25.882 "auth": { 00:19:25.882 "state": "completed", 00:19:25.882 "digest": "sha256", 00:19:25.882 "dhgroup": "ffdhe8192" 00:19:25.882 } 00:19:25.882 } 00:19:25.882 ]' 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.882 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.139 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.072 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:27.330 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:27.330 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.330 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.331 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.588 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.588 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:27.588 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.522 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.522 { 00:19:28.522 "cntlid": 45, 00:19:28.522 "qid": 0, 00:19:28.522 "state": "enabled", 00:19:28.522 "thread": "nvmf_tgt_poll_group_000", 00:19:28.522 "listen_address": { 00:19:28.522 "trtype": "TCP", 00:19:28.522 "adrfam": "IPv4", 00:19:28.522 "traddr": "10.0.0.2", 00:19:28.522 "trsvcid": "4420" 00:19:28.522 }, 00:19:28.522 "peer_address": { 00:19:28.522 "trtype": "TCP", 00:19:28.522 "adrfam": "IPv4", 00:19:28.522 "traddr": "10.0.0.1", 00:19:28.522 "trsvcid": "45516" 00:19:28.522 }, 00:19:28.522 "auth": { 00:19:28.522 "state": "completed", 00:19:28.522 "digest": "sha256", 00:19:28.522 "dhgroup": "ffdhe8192" 00:19:28.522 } 00:19:28.522 } 00:19:28.522 ]' 00:19:28.522 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.780 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.038 09:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret 
DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:19:29.973 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:29.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:29.973 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.231 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:31.165 00:19:31.165 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.165 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.165 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.422 { 00:19:31.422 "cntlid": 47, 00:19:31.422 "qid": 0, 00:19:31.422 "state": "enabled", 00:19:31.422 "thread": "nvmf_tgt_poll_group_000", 00:19:31.422 "listen_address": { 00:19:31.422 "trtype": "TCP", 00:19:31.422 "adrfam": "IPv4", 00:19:31.422 "traddr": "10.0.0.2", 00:19:31.422 "trsvcid": "4420" 00:19:31.422 }, 00:19:31.422 "peer_address": { 00:19:31.422 "trtype": "TCP", 00:19:31.422 "adrfam": "IPv4", 00:19:31.422 "traddr": "10.0.0.1", 00:19:31.422 "trsvcid": "45542" 00:19:31.422 }, 00:19:31.422 "auth": { 00:19:31.422 "state": "completed", 00:19:31.422 "digest": "sha256", 00:19:31.422 "dhgroup": "ffdhe8192" 00:19:31.422 } 00:19:31.422 } 00:19:31.422 ]' 00:19:31.422 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.680 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.940 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.873 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:32.873 09:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.131 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.389 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.905 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.905 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.905 { 00:19:33.905 "cntlid": 49, 00:19:33.905 "qid": 0, 00:19:33.905 "state": "enabled", 00:19:33.905 "thread": "nvmf_tgt_poll_group_000", 00:19:33.905 "listen_address": { 00:19:33.905 "trtype": "TCP", 00:19:33.905 "adrfam": "IPv4", 00:19:33.905 "traddr": "10.0.0.2", 00:19:33.905 "trsvcid": "4420" 00:19:33.905 }, 00:19:33.905 "peer_address": { 00:19:33.905 "trtype": "TCP", 00:19:33.906 "adrfam": "IPv4", 00:19:33.906 "traddr": "10.0.0.1", 00:19:33.906 "trsvcid": "45574" 00:19:33.906 }, 00:19:33.906 "auth": { 00:19:33.906 "state": "completed", 00:19:33.906 "digest": "sha384", 00:19:33.906 "dhgroup": "null" 00:19:33.906 } 00:19:33.906 } 00:19:33.906 ]' 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.906 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.164 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.097 09:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.097 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.369 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.945 00:19:35.945 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.945 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.945 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.945 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.945 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.945 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.945 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.202 { 00:19:36.202 "cntlid": 51, 00:19:36.202 "qid": 0, 00:19:36.202 "state": "enabled", 00:19:36.202 "thread": "nvmf_tgt_poll_group_000", 00:19:36.202 "listen_address": { 00:19:36.202 "trtype": "TCP", 00:19:36.202 "adrfam": "IPv4", 00:19:36.202 "traddr": "10.0.0.2", 00:19:36.202 "trsvcid": "4420" 00:19:36.202 }, 00:19:36.202 "peer_address": { 00:19:36.202 "trtype": "TCP", 00:19:36.202 "adrfam": "IPv4", 00:19:36.202 "traddr": "10.0.0.1", 00:19:36.202 "trsvcid": "45608" 00:19:36.202 }, 00:19:36.202 "auth": { 00:19:36.202 "state": "completed", 00:19:36.202 "digest": "sha384", 00:19:36.202 "dhgroup": "null" 00:19:36.202 } 00:19:36.202 } 00:19:36.202 ]' 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.202 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.460 09:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:19:37.396 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.396 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.397 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.653 09:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.910 00:19:37.910 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.910 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.910 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:19:38.168 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.168 { 00:19:38.168 "cntlid": 53, 00:19:38.168 "qid": 0, 00:19:38.168 "state": "enabled", 00:19:38.168 "thread": "nvmf_tgt_poll_group_000", 00:19:38.168 "listen_address": { 00:19:38.168 "trtype": "TCP", 00:19:38.168 "adrfam": "IPv4", 00:19:38.168 "traddr": "10.0.0.2", 00:19:38.168 "trsvcid": "4420" 00:19:38.168 }, 00:19:38.168 "peer_address": { 00:19:38.168 "trtype": "TCP", 00:19:38.168 "adrfam": "IPv4", 00:19:38.168 "traddr": "10.0.0.1", 00:19:38.168 "trsvcid": "50652" 00:19:38.168 }, 00:19:38.168 "auth": { 00:19:38.168 "state": "completed", 00:19:38.168 "digest": "sha384", 00:19:38.168 "dhgroup": "null" 00:19:38.168 } 00:19:38.168 } 00:19:38.168 ]' 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.426 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.684 09:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.616 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.873 09:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.130 00:19:40.130 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.130 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.130 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.415 { 00:19:40.415 "cntlid": 55, 00:19:40.415 "qid": 0, 00:19:40.415 "state": "enabled", 00:19:40.415 "thread": "nvmf_tgt_poll_group_000", 00:19:40.415 "listen_address": { 00:19:40.415 "trtype": "TCP", 00:19:40.415 "adrfam": "IPv4", 00:19:40.415 "traddr": "10.0.0.2", 00:19:40.415 "trsvcid": "4420" 00:19:40.415 }, 00:19:40.415 "peer_address": { 
00:19:40.415 "trtype": "TCP", 00:19:40.415 "adrfam": "IPv4", 00:19:40.415 "traddr": "10.0.0.1", 00:19:40.415 "trsvcid": "50662" 00:19:40.415 }, 00:19:40.415 "auth": { 00:19:40.415 "state": "completed", 00:19:40.415 "digest": "sha384", 00:19:40.415 "dhgroup": "null" 00:19:40.415 } 00:19:40.415 } 00:19:40.415 ]' 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.415 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.675 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.675 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.675 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.675 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.675 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.933 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:41.866 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:42.124 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:42.124 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.125 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.382 00:19:42.382 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.382 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.382 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.640 { 00:19:42.640 "cntlid": 57, 00:19:42.640 "qid": 0, 00:19:42.640 "state": "enabled", 00:19:42.640 "thread": "nvmf_tgt_poll_group_000", 00:19:42.640 "listen_address": { 00:19:42.640 "trtype": "TCP", 00:19:42.640 "adrfam": "IPv4", 00:19:42.640 "traddr": "10.0.0.2", 00:19:42.640 "trsvcid": "4420" 00:19:42.640 }, 00:19:42.640 "peer_address": { 00:19:42.640 "trtype": "TCP", 00:19:42.640 "adrfam": "IPv4", 00:19:42.640 "traddr": "10.0.0.1", 00:19:42.640 "trsvcid": "50690" 00:19:42.640 }, 00:19:42.640 "auth": { 00:19:42.640 "state": "completed", 00:19:42.640 "digest": "sha384", 00:19:42.640 "dhgroup": "ffdhe2048" 00:19:42.640 } 00:19:42.640 } 00:19:42.640 ]' 
00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:42.640 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:42.898 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:42.898 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:42.898 09:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:42.898 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=:
00:19:44.271 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:44.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:44.271 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:44.271 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
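The nvme connect / nvme disconnect pair that closed the round above repeats the handshake with the Linux kernel initiator, passing the same key material in nvme-cli's DHHC-1 string form. A sketch, reusing HOSTNQN from the sketch above and eliding the base64 payloads as '...' (the full strings appear verbatim in the entries above):

  # Kernel-initiator leg of the same round.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"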
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.271 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:44.529
00:19:44.529 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:44.529 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:44.529 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:44.786 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:44.786 {
00:19:44.786 "cntlid": 59,
00:19:44.786 "qid": 0,
00:19:44.786 "state": "enabled",
00:19:44.787 "thread": "nvmf_tgt_poll_group_000",
00:19:44.787 "listen_address": {
00:19:44.787 "trtype": "TCP",
00:19:44.787 "adrfam": "IPv4",
00:19:44.787 "traddr": "10.0.0.2",
00:19:44.787 "trsvcid": "4420"
00:19:44.787 },
00:19:44.787 "peer_address": {
00:19:44.787 "trtype": "TCP",
00:19:44.787 "adrfam": "IPv4",
00:19:44.787 "traddr": "10.0.0.1",
00:19:44.787 "trsvcid": "50730"
00:19:44.787 },
00:19:44.787 "auth": {
00:19:44.787 "state": "completed",
00:19:44.787 "digest": "sha384",
00:19:44.787 "dhgroup": "ffdhe2048"
00:19:44.787 }
00:19:44.787 }
00:19:44.787 ]'
00:19:44.787 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:44.787 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:44.787 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:45.045 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:45.045 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:45.045 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:45.045 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:45.045 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:45.303 09:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==:
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:46.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:46.235 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:46.492 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:46.750
00:19:46.750 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:46.750 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:46.750 09:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:47.008 {
00:19:47.008 "cntlid": 61,
00:19:47.008 "qid": 0,
00:19:47.008 "state": "enabled",
00:19:47.008 "thread": "nvmf_tgt_poll_group_000",
00:19:47.008 "listen_address": {
00:19:47.008 "trtype": "TCP",
00:19:47.008 "adrfam": "IPv4",
00:19:47.008 "traddr": "10.0.0.2",
00:19:47.008 "trsvcid": "4420"
00:19:47.008 },
00:19:47.008 "peer_address": {
00:19:47.008 "trtype": "TCP",
00:19:47.008 "adrfam": "IPv4",
00:19:47.008 "traddr": "10.0.0.1",
00:19:47.008 "trsvcid": "50766"
00:19:47.008 },
00:19:47.008 "auth": {
00:19:47.008 "state": "completed",
00:19:47.008 "digest": "sha384",
00:19:47.008 "dhgroup": "ffdhe2048"
00:19:47.008 }
00:19:47.008 }
00:19:47.008 ]'
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:47.008 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:47.266 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:47.266 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:47.266 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:47.266 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:47.266 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:47.524 09:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT:
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:48.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:48.457 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:48.715 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:48.973
00:19:48.973 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:48.973 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:48.973 09:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:49.231 {
00:19:49.231 "cntlid": 63,
00:19:49.231 "qid": 0,
00:19:49.231 "state": "enabled",
00:19:49.231 "thread": "nvmf_tgt_poll_group_000",
00:19:49.231 "listen_address": {
00:19:49.231 "trtype": "TCP",
00:19:49.231 "adrfam": "IPv4",
00:19:49.231 "traddr": "10.0.0.2",
00:19:49.231 "trsvcid": "4420"
00:19:49.231 },
00:19:49.231 "peer_address": {
00:19:49.231 "trtype": "TCP",
00:19:49.231 "adrfam": "IPv4",
00:19:49.231 "traddr": "10.0.0.1",
00:19:49.231 "trsvcid": "49964"
00:19:49.231 },
00:19:49.231 "auth": {
00:19:49.231 "state": "completed",
00:19:49.231 "digest": "sha384",
00:19:49.231 "dhgroup": "ffdhe2048"
00:19:49.231 }
00:19:49.231 }
00:19:49.231 ]'
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:49.231 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:49.489 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=:
00:19:50.421 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:50.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
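Note that key3 is the one key in this run with no companion ckey3: the ${ckeys[$3]:+...} expansion in the trace above therefore produced no --dhchap-ctrlr-key argument anywhere in that round, so it exercised one-way DH-HMAC-CHAP (no controller-to-host challenge) before the outer loop, just below, advances to the next dhgroup. The :+ idiom in isolation, with hypothetical placeholder values standing in for the real DHHC-1 strings:

  # ${var:+word} expands to word only when var is set and non-empty, so an
  # empty ckeys[3] leaves the ckey array empty and no flag reaches the RPCs.
  ckeys=([0]="ckey0" [1]="ckey1" [2]="ckey2" [3]="")   # placeholders
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # 0 for keyid=3, 2 for keyids 0..2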
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:50.679 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:50.937 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:50.937 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:50.937 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:51.195
00:19:51.195 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:51.195 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:51.195 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:51.452 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:51.452 {
00:19:51.452 "cntlid": 65,
00:19:51.452 "qid": 0,
00:19:51.452 "state": "enabled",
00:19:51.452 "thread": "nvmf_tgt_poll_group_000",
00:19:51.452 "listen_address": {
00:19:51.452 "trtype": "TCP",
00:19:51.452 "adrfam": "IPv4",
00:19:51.452 "traddr": "10.0.0.2",
00:19:51.452 "trsvcid": "4420"
00:19:51.452 },
00:19:51.452 "peer_address": {
00:19:51.452 "trtype": "TCP",
00:19:51.452 "adrfam": "IPv4",
00:19:51.452 "traddr": "10.0.0.1",
00:19:51.452 "trsvcid": "49976"
00:19:51.452 },
00:19:51.453 "auth": {
00:19:51.453 "state": "completed",
00:19:51.453 "digest": "sha384",
00:19:51.453 "dhgroup": "ffdhe3072"
00:19:51.453 }
00:19:51.453 }
00:19:51.453 ]'
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:51.453 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:51.710 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=:
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:52.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:52.643 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:52.901 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:19:53.465
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:53.465 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:53.465 {
00:19:53.465 "cntlid": 67,
00:19:53.465 "qid": 0,
00:19:53.465 "state": "enabled",
00:19:53.465 "thread": "nvmf_tgt_poll_group_000",
00:19:53.465 "listen_address": {
00:19:53.465 "trtype": "TCP",
00:19:53.465 "adrfam": "IPv4",
00:19:53.465 "traddr": "10.0.0.2",
00:19:53.465 "trsvcid": "4420"
00:19:53.465 },
00:19:53.465 "peer_address": {
00:19:53.465 "trtype": "TCP",
00:19:53.465 "adrfam": "IPv4",
00:19:53.465 "traddr": "10.0.0.1",
00:19:53.465 "trsvcid": "49992"
00:19:53.465 },
00:19:53.465 "auth": {
00:19:53.465 "state": "completed",
00:19:53.465 "digest": "sha384",
00:19:53.465 "dhgroup": "ffdhe3072"
00:19:53.465 }
00:19:53.465 }
00:19:53.465 ]'
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:53.721 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:53.977 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==:
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:54.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:54.909 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.167 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:19:55.429
00:19:55.429 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:55.429 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:55.429 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:55.694 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:55.694 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:55.695 {
00:19:55.695 "cntlid": 69,
00:19:55.695 "qid": 0,
00:19:55.695 "state": "enabled",
00:19:55.695 "thread": "nvmf_tgt_poll_group_000",
00:19:55.695 "listen_address": {
00:19:55.695 "trtype": "TCP",
00:19:55.695 "adrfam": "IPv4",
00:19:55.695 "traddr": "10.0.0.2",
00:19:55.695 "trsvcid": "4420"
00:19:55.695 },
00:19:55.695 "peer_address": {
00:19:55.695 "trtype": "TCP",
00:19:55.695 "adrfam": "IPv4",
00:19:55.695 "traddr": "10.0.0.1",
00:19:55.695 "trsvcid": "50010"
00:19:55.695 },
00:19:55.695 "auth": {
00:19:55.695 "state": "completed",
00:19:55.695 "digest": "sha384",
00:19:55.695 "dhgroup": "ffdhe3072"
00:19:55.695 }
00:19:55.695 }
00:19:55.695 ]'
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:55.695 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:55.951 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:55.951 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:55.951 09:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:56.208 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT:
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:57.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:57.140 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:57.397 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:19:57.654
00:19:57.654 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:57.654 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:57.654 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:57.911 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:57.911 {
00:19:57.911 "cntlid": 71,
00:19:57.911 "qid": 0,
00:19:57.911 "state": "enabled",
00:19:57.911 "thread": "nvmf_tgt_poll_group_000",
00:19:57.912 "listen_address": {
00:19:57.912 "trtype": "TCP",
00:19:57.912 "adrfam": "IPv4",
00:19:57.912 "traddr": "10.0.0.2",
00:19:57.912 "trsvcid": "4420"
00:19:57.912 },
00:19:57.912 "peer_address": {
00:19:57.912 "trtype": "TCP",
00:19:57.912 "adrfam": "IPv4",
00:19:57.912 "traddr": "10.0.0.1",
00:19:57.912 "trsvcid": "50036"
00:19:57.912 },
00:19:57.912 "auth": {
00:19:57.912 "state": "completed",
00:19:57.912 "digest": "sha384",
00:19:57.912 "dhgroup": "ffdhe3072"
00:19:57.912 }
00:19:57.912 }
00:19:57.912 ]'
00:19:57.912 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:19:57.912 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:19:57.912 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:19:57.912 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:19:57.912 09:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:19:58.169 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:19:58.169 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:19:58.169 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:19:58.427 09:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=:
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:19:59.406 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
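With the ffdhe3072 rounds exhausted, the sweep repeats below with ffdhe4096. The whole block is two nested loops that re-arm the host with bdev_nvme_set_options before every attempt; reduced to a sketch (identifiers as they appear in the trace; the keys and dhgroups arrays are defined earlier in auth.sh, outside this excerpt, and the surrounding sections of this log presumably vary the digest the same way):

  for dhgroup in "${dhgroups[@]}"; do          # ffdhe2048 ffdhe3072 ffdhe4096 ...
      for keyid in "${!keys[@]}"; do           # 0 1 2 3 in this run
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 \
              --dhchap-dhgroups "$dhgroup"     # restrict what the host may offer
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done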
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:59.406 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:19:59.664 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0
00:19:59.664 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:19:59.664 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:19:59.664 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:19:59.664 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.665 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:19:59.922
00:19:59.922 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:19:59.922 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:19:59.922 09:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:00.181 {
00:20:00.181 "cntlid": 73,
00:20:00.181 "qid": 0,
00:20:00.181 "state": "enabled",
00:20:00.181 "thread": "nvmf_tgt_poll_group_000",
00:20:00.181 "listen_address": {
00:20:00.181 "trtype": "TCP",
00:20:00.181 "adrfam": "IPv4",
00:20:00.181 "traddr": "10.0.0.2",
00:20:00.181 "trsvcid": "4420"
00:20:00.181 },
00:20:00.181 "peer_address": {
00:20:00.181 "trtype": "TCP",
00:20:00.181 "adrfam": "IPv4",
00:20:00.181 "traddr": "10.0.0.1",
00:20:00.181 "trsvcid": "46384"
00:20:00.181 },
00:20:00.181 "auth": {
00:20:00.181 "state": "completed",
00:20:00.181 "digest": "sha384",
00:20:00.181 "dhgroup": "ffdhe4096"
00:20:00.181 }
00:20:00.181 }
00:20:00.181 ]'
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:00.181 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:00.439 09:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=:
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:01.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:01.372 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:01.630 09:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:02.197
00:20:02.197 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:20:02.197 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:20:02.197 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:20:02.455 { 00:20:02.455 "cntlid": 75, 00:20:02.455 "qid": 0, 00:20:02.455 "state": "enabled", 00:20:02.455 "thread": "nvmf_tgt_poll_group_000", 00:20:02.455 "listen_address": { 00:20:02.455 "trtype": "TCP", 00:20:02.455 "adrfam": "IPv4", 00:20:02.455 "traddr": "10.0.0.2", 00:20:02.455 "trsvcid": "4420" 00:20:02.455 }, 00:20:02.455 "peer_address": { 00:20:02.455 "trtype": "TCP", 00:20:02.455 "adrfam": "IPv4", 00:20:02.455 "traddr": "10.0.0.1", 00:20:02.455 "trsvcid": "46408" 00:20:02.455 }, 00:20:02.455 "auth": { 00:20:02.455 "state": "completed", 00:20:02.455 "digest": "sha384", 00:20:02.455 "dhgroup": "ffdhe4096" 00:20:02.455 } 00:20:02.455 } 00:20:02.455 ]' 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.455 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.713 09:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.645 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:03.904 
09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.904 09:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.904 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.904 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.904 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.469 00:20:04.469 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.469 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.469 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.727 { 00:20:04.727 "cntlid": 77, 00:20:04.727 "qid": 0, 00:20:04.727 "state": "enabled", 00:20:04.727 "thread": "nvmf_tgt_poll_group_000", 00:20:04.727 "listen_address": { 00:20:04.727 "trtype": "TCP", 00:20:04.727 "adrfam": "IPv4", 00:20:04.727 "traddr": "10.0.0.2", 00:20:04.727 "trsvcid": "4420" 00:20:04.727 }, 00:20:04.727 "peer_address": { 
00:20:04.727 "trtype": "TCP", 00:20:04.727 "adrfam": "IPv4", 00:20:04.727 "traddr": "10.0.0.1", 00:20:04.727 "trsvcid": "46446" 00:20:04.727 }, 00:20:04.727 "auth": { 00:20:04.727 "state": "completed", 00:20:04.727 "digest": "sha384", 00:20:04.727 "dhgroup": "ffdhe4096" 00:20:04.727 } 00:20:04.727 } 00:20:04.727 ]' 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.727 09:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.986 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.919 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:05.920 09:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.178 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.744 00:20:06.744 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.744 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.744 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.002 { 00:20:07.002 "cntlid": 79, 00:20:07.002 "qid": 0, 00:20:07.002 "state": "enabled", 00:20:07.002 "thread": "nvmf_tgt_poll_group_000", 00:20:07.002 "listen_address": { 00:20:07.002 "trtype": "TCP", 00:20:07.002 "adrfam": "IPv4", 00:20:07.002 "traddr": "10.0.0.2", 00:20:07.002 "trsvcid": "4420" 00:20:07.002 }, 00:20:07.002 "peer_address": { 00:20:07.002 "trtype": "TCP", 00:20:07.002 "adrfam": "IPv4", 00:20:07.002 "traddr": "10.0.0.1", 00:20:07.002 "trsvcid": "46478" 00:20:07.002 }, 00:20:07.002 "auth": { 00:20:07.002 "state": "completed", 00:20:07.002 "digest": "sha384", 00:20:07.002 "dhgroup": "ffdhe4096" 00:20:07.002 } 00:20:07.002 } 00:20:07.002 ]' 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.002 09:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.261 09:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.192 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.451 09:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.015 00:20:09.015 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.015 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.015 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.273 { 00:20:09.273 "cntlid": 81, 00:20:09.273 "qid": 0, 00:20:09.273 "state": "enabled", 00:20:09.273 "thread": "nvmf_tgt_poll_group_000", 00:20:09.273 "listen_address": { 00:20:09.273 "trtype": "TCP", 00:20:09.273 "adrfam": "IPv4", 00:20:09.273 "traddr": "10.0.0.2", 00:20:09.273 "trsvcid": "4420" 00:20:09.273 }, 00:20:09.273 "peer_address": { 00:20:09.273 "trtype": "TCP", 00:20:09.273 "adrfam": "IPv4", 00:20:09.273 "traddr": "10.0.0.1", 00:20:09.273 "trsvcid": "41390" 00:20:09.273 }, 00:20:09.273 "auth": { 00:20:09.273 "state": "completed", 00:20:09.273 "digest": "sha384", 00:20:09.273 "dhgroup": "ffdhe6144" 00:20:09.273 } 00:20:09.273 } 00:20:09.273 ]' 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.273 09:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.273 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.530 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.530 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.530 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.786 09:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.718 09:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.718 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.976 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.976 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.976 09:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.540 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.540 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.540 { 00:20:11.540 "cntlid": 83, 00:20:11.540 "qid": 0, 00:20:11.540 "state": "enabled", 00:20:11.540 "thread": "nvmf_tgt_poll_group_000", 00:20:11.540 "listen_address": { 00:20:11.540 "trtype": "TCP", 00:20:11.540 "adrfam": "IPv4", 00:20:11.540 "traddr": "10.0.0.2", 00:20:11.540 "trsvcid": "4420" 00:20:11.540 }, 00:20:11.540 "peer_address": { 00:20:11.540 "trtype": "TCP", 00:20:11.541 "adrfam": "IPv4", 00:20:11.541 "traddr": "10.0.0.1", 00:20:11.541 "trsvcid": "41424" 00:20:11.541 }, 00:20:11.541 "auth": { 00:20:11.541 "state": "completed", 00:20:11.541 "digest": "sha384", 00:20:11.541 "dhgroup": "ffdhe6144" 00:20:11.541 } 00:20:11.541 } 00:20:11.541 ]' 00:20:11.541 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.799 09:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.056 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:12.987 09:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:13.244 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:13.244 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.244 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.245 09:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.245 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.809 00:20:13.809 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.809 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.809 09:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.067 { 00:20:14.067 "cntlid": 85, 00:20:14.067 "qid": 0, 00:20:14.067 "state": "enabled", 00:20:14.067 "thread": "nvmf_tgt_poll_group_000", 00:20:14.067 "listen_address": { 00:20:14.067 "trtype": "TCP", 00:20:14.067 "adrfam": "IPv4", 00:20:14.067 "traddr": "10.0.0.2", 00:20:14.067 "trsvcid": "4420" 00:20:14.067 }, 00:20:14.067 "peer_address": { 00:20:14.067 "trtype": "TCP", 00:20:14.067 "adrfam": "IPv4", 00:20:14.067 "traddr": "10.0.0.1", 00:20:14.067 "trsvcid": "41452" 00:20:14.067 }, 00:20:14.067 "auth": { 00:20:14.067 "state": "completed", 00:20:14.067 "digest": "sha384", 00:20:14.067 "dhgroup": "ffdhe6144" 00:20:14.067 } 00:20:14.067 } 00:20:14.067 ]' 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.067 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.325 09:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.694 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.694 09:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:15.694 09:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.257 00:20:16.257 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.258 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.258 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.623 { 00:20:16.623 "cntlid": 87, 00:20:16.623 "qid": 0, 00:20:16.623 "state": "enabled", 00:20:16.623 "thread": "nvmf_tgt_poll_group_000", 00:20:16.623 "listen_address": { 00:20:16.623 "trtype": "TCP", 00:20:16.623 "adrfam": "IPv4", 00:20:16.623 "traddr": "10.0.0.2", 00:20:16.623 "trsvcid": "4420" 00:20:16.623 }, 00:20:16.623 "peer_address": { 00:20:16.623 "trtype": "TCP", 00:20:16.623 "adrfam": "IPv4", 00:20:16.623 "traddr": "10.0.0.1", 00:20:16.623 "trsvcid": "41474" 00:20:16.623 }, 00:20:16.623 "auth": { 00:20:16.623 "state": "completed", 00:20:16.623 "digest": "sha384", 00:20:16.623 "dhgroup": "ffdhe6144" 00:20:16.623 } 00:20:16.623 } 00:20:16.623 ]' 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.623 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.908 09:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a 
--dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:17.840 09:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.098 09:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.029 00:20:19.029 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.029 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.029 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.292 { 00:20:19.292 "cntlid": 89, 00:20:19.292 "qid": 0, 00:20:19.292 "state": "enabled", 00:20:19.292 "thread": "nvmf_tgt_poll_group_000", 00:20:19.292 "listen_address": { 00:20:19.292 "trtype": "TCP", 00:20:19.292 "adrfam": "IPv4", 00:20:19.292 "traddr": "10.0.0.2", 00:20:19.292 "trsvcid": "4420" 00:20:19.292 }, 00:20:19.292 "peer_address": { 00:20:19.292 "trtype": "TCP", 00:20:19.292 "adrfam": "IPv4", 00:20:19.292 "traddr": "10.0.0.1", 00:20:19.292 "trsvcid": "60464" 00:20:19.292 }, 00:20:19.292 "auth": { 00:20:19.292 "state": "completed", 00:20:19.292 "digest": "sha384", 00:20:19.292 "dhgroup": "ffdhe8192" 00:20:19.292 } 00:20:19.292 } 00:20:19.292 ]' 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:19.292 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.549 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.549 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.549 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.807 09:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:20.743 09:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.743 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.743 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.743 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.743 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.743 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.743 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.744 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:20.744 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.001 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.002 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.002 09:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.933 00:20:21.933 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.933 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.933 09:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.191 { 00:20:22.191 "cntlid": 91, 00:20:22.191 "qid": 0, 00:20:22.191 "state": "enabled", 00:20:22.191 "thread": "nvmf_tgt_poll_group_000", 00:20:22.191 "listen_address": { 00:20:22.191 "trtype": "TCP", 00:20:22.191 "adrfam": "IPv4", 00:20:22.191 "traddr": "10.0.0.2", 00:20:22.191 "trsvcid": "4420" 00:20:22.191 }, 00:20:22.191 "peer_address": { 00:20:22.191 "trtype": "TCP", 00:20:22.191 "adrfam": "IPv4", 00:20:22.191 "traddr": "10.0.0.1", 00:20:22.191 "trsvcid": "60496" 00:20:22.191 }, 00:20:22.191 "auth": { 00:20:22.191 "state": "completed", 00:20:22.191 "digest": "sha384", 00:20:22.191 "dhgroup": "ffdhe8192" 00:20:22.191 } 00:20:22.191 } 00:20:22.191 ]' 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.191 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.449 09:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.382 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:23.640 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.641 09:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.574 00:20:24.574 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.574 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.574 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:24.832 { 00:20:24.832 "cntlid": 93, 00:20:24.832 "qid": 0, 00:20:24.832 "state": "enabled", 00:20:24.832 "thread": "nvmf_tgt_poll_group_000", 00:20:24.832 "listen_address": { 00:20:24.832 "trtype": "TCP", 00:20:24.832 "adrfam": "IPv4", 00:20:24.832 "traddr": "10.0.0.2", 00:20:24.832 "trsvcid": "4420" 00:20:24.832 }, 00:20:24.832 "peer_address": { 00:20:24.832 "trtype": "TCP", 00:20:24.832 "adrfam": "IPv4", 00:20:24.832 "traddr": "10.0.0.1", 00:20:24.832 "trsvcid": "60526" 00:20:24.832 }, 00:20:24.832 "auth": { 00:20:24.832 "state": "completed", 00:20:24.832 "digest": "sha384", 00:20:24.832 "dhgroup": "ffdhe8192" 00:20:24.832 } 00:20:24.832 } 00:20:24.832 ]' 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.832 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.090 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.090 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.090 09:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.090 09:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.463 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.463 09:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.463 09:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.395 00:20:27.395 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.395 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.395 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.652 { 00:20:27.652 "cntlid": 95, 00:20:27.652 "qid": 0, 00:20:27.652 "state": "enabled", 00:20:27.652 "thread": "nvmf_tgt_poll_group_000", 00:20:27.652 "listen_address": { 00:20:27.652 "trtype": "TCP", 00:20:27.652 "adrfam": "IPv4", 00:20:27.652 "traddr": "10.0.0.2", 00:20:27.652 "trsvcid": "4420" 00:20:27.652 }, 00:20:27.652 "peer_address": { 00:20:27.652 "trtype": "TCP", 00:20:27.652 "adrfam": "IPv4", 00:20:27.652 "traddr": "10.0.0.1", 00:20:27.652 "trsvcid": "60560" 00:20:27.652 }, 00:20:27.652 "auth": { 00:20:27.652 "state": "completed", 00:20:27.652 "digest": "sha384", 00:20:27.652 "dhgroup": "ffdhe8192" 00:20:27.652 } 00:20:27.652 } 00:20:27.652 ]' 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.652 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.910 09:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.842 09:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:28.842 09:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.100 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.666 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.666 09:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.666 { 00:20:29.666 "cntlid": 97, 00:20:29.666 "qid": 0, 00:20:29.666 "state": "enabled", 00:20:29.666 "thread": "nvmf_tgt_poll_group_000", 00:20:29.666 "listen_address": { 00:20:29.666 "trtype": "TCP", 00:20:29.666 "adrfam": "IPv4", 00:20:29.666 "traddr": "10.0.0.2", 00:20:29.666 "trsvcid": "4420" 00:20:29.666 }, 00:20:29.666 "peer_address": { 00:20:29.666 "trtype": "TCP", 00:20:29.666 "adrfam": "IPv4", 00:20:29.666 "traddr": "10.0.0.1", 00:20:29.666 "trsvcid": "42332" 00:20:29.666 }, 00:20:29.666 "auth": { 00:20:29.666 "state": "completed", 00:20:29.666 "digest": "sha512", 00:20:29.666 "dhgroup": "null" 00:20:29.666 } 00:20:29.666 } 00:20:29.666 ]' 00:20:29.666 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.923 09:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.181 09:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.113 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.370 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.628 00:20:31.628 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:31.628 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:31.628 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.885 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:31.885 { 00:20:31.885 "cntlid": 99, 00:20:31.885 "qid": 0, 00:20:31.885 "state": "enabled", 00:20:31.885 "thread": "nvmf_tgt_poll_group_000", 00:20:31.885 "listen_address": { 00:20:31.885 "trtype": "TCP", 00:20:31.885 "adrfam": "IPv4", 00:20:31.885 
"traddr": "10.0.0.2", 00:20:31.885 "trsvcid": "4420" 00:20:31.885 }, 00:20:31.886 "peer_address": { 00:20:31.886 "trtype": "TCP", 00:20:31.886 "adrfam": "IPv4", 00:20:31.886 "traddr": "10.0.0.1", 00:20:31.886 "trsvcid": "42356" 00:20:31.886 }, 00:20:31.886 "auth": { 00:20:31.886 "state": "completed", 00:20:31.886 "digest": "sha512", 00:20:31.886 "dhgroup": "null" 00:20:31.886 } 00:20:31.886 } 00:20:31.886 ]' 00:20:31.886 09:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.143 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:32.143 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.144 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:32.144 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.144 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.144 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.144 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.401 09:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.334 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.592 09:06:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.592 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.158 00:20:34.158 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.158 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.158 09:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.158 { 00:20:34.158 "cntlid": 101, 00:20:34.158 "qid": 0, 00:20:34.158 "state": "enabled", 00:20:34.158 "thread": "nvmf_tgt_poll_group_000", 00:20:34.158 "listen_address": { 00:20:34.158 "trtype": "TCP", 00:20:34.158 "adrfam": "IPv4", 00:20:34.158 "traddr": "10.0.0.2", 00:20:34.158 "trsvcid": "4420" 00:20:34.158 }, 00:20:34.158 "peer_address": { 00:20:34.158 "trtype": "TCP", 00:20:34.158 "adrfam": "IPv4", 00:20:34.158 "traddr": "10.0.0.1", 00:20:34.158 "trsvcid": "42378" 00:20:34.158 }, 00:20:34.158 "auth": { 00:20:34.158 "state": "completed", 00:20:34.158 "digest": "sha512", 00:20:34.158 "dhgroup": "null" 
00:20:34.158 } 00:20:34.158 } 00:20:34.158 ]' 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:34.158 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.417 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:34.417 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.417 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.417 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.417 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.703 09:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.640 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:35.897 09:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.153 00:20:36.153 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.153 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.153 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:36.409 { 00:20:36.409 "cntlid": 103, 00:20:36.409 "qid": 0, 00:20:36.409 "state": "enabled", 00:20:36.409 "thread": "nvmf_tgt_poll_group_000", 00:20:36.409 "listen_address": { 00:20:36.409 "trtype": "TCP", 00:20:36.409 "adrfam": "IPv4", 00:20:36.409 "traddr": "10.0.0.2", 00:20:36.409 "trsvcid": "4420" 00:20:36.409 }, 00:20:36.409 "peer_address": { 00:20:36.409 "trtype": "TCP", 00:20:36.409 "adrfam": "IPv4", 00:20:36.409 "traddr": "10.0.0.1", 00:20:36.409 "trsvcid": "42414" 00:20:36.409 }, 00:20:36.409 "auth": { 00:20:36.409 "state": "completed", 00:20:36.409 "digest": "sha512", 00:20:36.409 "dhgroup": "null" 00:20:36.409 } 00:20:36.409 } 00:20:36.409 ]' 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:36.409 09:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:36.409 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:36.667 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.667 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.667 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.925 09:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:37.857 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:37.858 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.116 09:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.116 09:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.373 00:20:38.373 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:38.373 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:38.373 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:38.631 { 00:20:38.631 "cntlid": 105, 00:20:38.631 "qid": 0, 00:20:38.631 "state": "enabled", 00:20:38.631 "thread": "nvmf_tgt_poll_group_000", 00:20:38.631 "listen_address": { 00:20:38.631 "trtype": "TCP", 00:20:38.631 "adrfam": "IPv4", 00:20:38.631 "traddr": "10.0.0.2", 00:20:38.631 "trsvcid": "4420" 00:20:38.631 }, 00:20:38.631 "peer_address": { 00:20:38.631 "trtype": "TCP", 00:20:38.631 "adrfam": "IPv4", 00:20:38.631 "traddr": "10.0.0.1", 00:20:38.631 "trsvcid": "41228" 00:20:38.631 }, 00:20:38.631 "auth": { 00:20:38.631 "state": "completed", 00:20:38.631 "digest": "sha512", 00:20:38.631 "dhgroup": "ffdhe2048" 00:20:38.631 } 00:20:38.631 } 00:20:38.631 ]' 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.631 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.888 09:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.261 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.261 09:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.261 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.518 00:20:40.518 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:40.518 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:40.518 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:40.774 { 00:20:40.774 "cntlid": 107, 00:20:40.774 "qid": 0, 00:20:40.774 "state": "enabled", 00:20:40.774 "thread": "nvmf_tgt_poll_group_000", 00:20:40.774 "listen_address": { 00:20:40.774 "trtype": "TCP", 00:20:40.774 "adrfam": "IPv4", 00:20:40.774 "traddr": "10.0.0.2", 00:20:40.774 "trsvcid": "4420" 00:20:40.774 }, 00:20:40.774 "peer_address": { 00:20:40.774 "trtype": "TCP", 00:20:40.774 "adrfam": "IPv4", 00:20:40.774 "traddr": "10.0.0.1", 00:20:40.774 "trsvcid": "41246" 00:20:40.774 }, 00:20:40.774 "auth": { 00:20:40.774 "state": "completed", 00:20:40.774 "digest": "sha512", 00:20:40.774 "dhgroup": "ffdhe2048" 00:20:40.774 } 00:20:40.774 } 00:20:40.774 ]' 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:40.774 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.030 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.031 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.031 09:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.287 09:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.220 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.220 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
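The trace above repeats one fixed sequence per digest/dhgroup/key combination (here sha512 + ffdhe2048, key slots 0-3). A minimal bash sketch of one such iteration, reconstructed only from commands visible in this trace — HOSTRPC is the host-side rpc.py on /var/tmp/host.sock exactly as used throughout; TGTRPC is assumed to be the same rpc.py pointed at the target's default socket; KEY is a placeholder slot index:

# One iteration of the DH-HMAC-CHAP auth loop (sketch; assumptions above).
RPCPY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOSTRPC="$RPCPY -s /var/tmp/host.sock"   # host-side SPDK instance (as in the trace)
TGTRPC="$RPCPY"                          # target side, default socket (assumed)
KEY=2                                    # placeholder slot index
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
# Restrict the host to the single digest/dhgroup pair under test
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
# Authorize the host NQN on the subsystem with the key pair for this slot
$TGTRPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 $HOSTNQN \
    --dhchap-key key$KEY --dhchap-ctrlr-key ckey$KEY
# Attach a controller so the fabric CONNECT is authenticated with those keys
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n nqn.2024-03.io.spdk:cnode0 \
    --dhchap-key key$KEY --dhchap-ctrlr-key ckey$KEY
# Confirm the target's qpair reports the negotiated digest/dhgroup as completed
$TGTRPC nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'
$HOSTRPC bdev_nvme_detach_controller nvme0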
00:20:42.221 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.787 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.787 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:43.077 { 00:20:43.077 "cntlid": 109, 00:20:43.077 "qid": 0, 00:20:43.077 "state": "enabled", 00:20:43.077 "thread": "nvmf_tgt_poll_group_000", 00:20:43.077 "listen_address": { 00:20:43.077 "trtype": "TCP", 00:20:43.077 "adrfam": "IPv4", 00:20:43.077 "traddr": "10.0.0.2", 00:20:43.077 "trsvcid": "4420" 00:20:43.077 }, 00:20:43.077 "peer_address": { 00:20:43.077 "trtype": "TCP", 00:20:43.077 "adrfam": "IPv4", 00:20:43.077 "traddr": "10.0.0.1", 00:20:43.077 "trsvcid": "41282" 00:20:43.077 }, 00:20:43.077 "auth": { 00:20:43.077 "state": "completed", 00:20:43.077 "digest": "sha512", 00:20:43.077 "dhgroup": "ffdhe2048" 00:20:43.077 } 00:20:43.077 } 00:20:43.077 ]' 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.077 09:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:43.077 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.077 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.077 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.335 09:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 
--hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.265 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.265 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.523 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:44.781 00:20:44.781 09:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:44.781 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.781 09:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.039 { 00:20:45.039 "cntlid": 111, 00:20:45.039 "qid": 0, 00:20:45.039 "state": "enabled", 00:20:45.039 "thread": "nvmf_tgt_poll_group_000", 00:20:45.039 "listen_address": { 00:20:45.039 "trtype": "TCP", 00:20:45.039 "adrfam": "IPv4", 00:20:45.039 "traddr": "10.0.0.2", 00:20:45.039 "trsvcid": "4420" 00:20:45.039 }, 00:20:45.039 "peer_address": { 00:20:45.039 "trtype": "TCP", 00:20:45.039 "adrfam": "IPv4", 00:20:45.039 "traddr": "10.0.0.1", 00:20:45.039 "trsvcid": "41310" 00:20:45.039 }, 00:20:45.039 "auth": { 00:20:45.039 "state": "completed", 00:20:45.039 "digest": "sha512", 00:20:45.039 "dhgroup": "ffdhe2048" 00:20:45.039 } 00:20:45.039 } 00:20:45.039 ]' 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.039 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.296 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:45.296 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.296 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.296 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.296 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.553 09:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.485 09:06:24 
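
Each cycle is then verified the same way: the controller name is read back, and the subsystem's qpair list is checked for the negotiated auth parameters. A condensed sketch of those checks (rpc_cmd and hostrpc are the script's wrappers around scripts/rpc.py; values match the sha512/ffdhe2048 cycle above):

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The qpair JSON printed in the log carries an "auth" object once the
    # handshake finishes; all three fields must match what was configured.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
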
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.485 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.743 09:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.000 00:20:47.000 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.000 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.000 09:06:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:47.258 { 00:20:47.258 "cntlid": 113, 00:20:47.258 "qid": 0, 00:20:47.258 "state": "enabled", 00:20:47.258 "thread": "nvmf_tgt_poll_group_000", 00:20:47.258 "listen_address": { 00:20:47.258 "trtype": "TCP", 00:20:47.258 "adrfam": "IPv4", 00:20:47.258 "traddr": "10.0.0.2", 00:20:47.258 "trsvcid": "4420" 00:20:47.258 }, 00:20:47.258 "peer_address": { 00:20:47.258 "trtype": "TCP", 00:20:47.258 "adrfam": "IPv4", 00:20:47.258 "traddr": "10.0.0.1", 00:20:47.258 "trsvcid": "41338" 00:20:47.258 }, 00:20:47.258 "auth": { 00:20:47.258 "state": "completed", 00:20:47.258 "digest": "sha512", 00:20:47.258 "dhgroup": "ffdhe3072" 00:20:47.258 } 00:20:47.258 } 00:20:47.258 ]' 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:47.258 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.517 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.775 09:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.711 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.969 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.970 09:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.227 00:20:49.227 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:49.227 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:49.227 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:49.485 { 00:20:49.485 "cntlid": 115, 00:20:49.485 "qid": 0, 00:20:49.485 "state": "enabled", 00:20:49.485 "thread": "nvmf_tgt_poll_group_000", 00:20:49.485 "listen_address": { 00:20:49.485 "trtype": "TCP", 00:20:49.485 "adrfam": "IPv4", 00:20:49.485 "traddr": "10.0.0.2", 00:20:49.485 "trsvcid": "4420" 00:20:49.485 }, 00:20:49.485 "peer_address": { 00:20:49.485 "trtype": "TCP", 00:20:49.485 "adrfam": "IPv4", 00:20:49.485 "traddr": "10.0.0.1", 00:20:49.485 "trsvcid": "48476" 00:20:49.485 }, 00:20:49.485 "auth": { 00:20:49.485 "state": "completed", 00:20:49.485 "digest": "sha512", 00:20:49.485 "dhgroup": "ffdhe3072" 00:20:49.485 } 00:20:49.485 } 00:20:49.485 ]' 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:49.485 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:49.743 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.743 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:49.743 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.743 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.743 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.003 09:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.941 09:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:50.941 09:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:51.199 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:51.199 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.199 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.199 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.200 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:51.458 00:20:51.458 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:51.458 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.458 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:51.715 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.715 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.715 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.715 09:06:29 
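
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line echoed at auth.sh@37 in these entries is what makes bidirectional authentication optional: inside connect_authenticate, $3 is the key index, and the :+ expansion emits the extra flag only when a controller key exists for that index. That is why the key3 cycles add the host with --dhchap-key key3 alone, while keys 0-2 also pass a --dhchap-ctrlr-key. Sketched with $subnqn/$hostnqn as shorthand for the literal NQNs in the log:

    # $1=digest $2=dhgroup $3=key index (per the auth.sh@36/37 lines above).
    # Empty ckeys[$3] -> ckey=() -> host-only (unidirectional) authentication.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"
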
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.715 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:51.716 { 00:20:51.716 "cntlid": 117, 00:20:51.716 "qid": 0, 00:20:51.716 "state": "enabled", 00:20:51.716 "thread": "nvmf_tgt_poll_group_000", 00:20:51.716 "listen_address": { 00:20:51.716 "trtype": "TCP", 00:20:51.716 "adrfam": "IPv4", 00:20:51.716 "traddr": "10.0.0.2", 00:20:51.716 "trsvcid": "4420" 00:20:51.716 }, 00:20:51.716 "peer_address": { 00:20:51.716 "trtype": "TCP", 00:20:51.716 "adrfam": "IPv4", 00:20:51.716 "traddr": "10.0.0.1", 00:20:51.716 "trsvcid": "48510" 00:20:51.716 }, 00:20:51.716 "auth": { 00:20:51.716 "state": "completed", 00:20:51.716 "digest": "sha512", 00:20:51.716 "dhgroup": "ffdhe3072" 00:20:51.716 } 00:20:51.716 } 00:20:51.716 ]' 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.716 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.978 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.978 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.978 09:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.978 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:20:52.950 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.950 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:52.950 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.950 09:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.950 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.950 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.950 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:20:52.950 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.207 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:53.771 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.771 { 00:20:53.771 "cntlid": 119, 00:20:53.771 "qid": 0, 00:20:53.771 "state": "enabled", 00:20:53.771 "thread": 
"nvmf_tgt_poll_group_000", 00:20:53.771 "listen_address": { 00:20:53.771 "trtype": "TCP", 00:20:53.771 "adrfam": "IPv4", 00:20:53.771 "traddr": "10.0.0.2", 00:20:53.771 "trsvcid": "4420" 00:20:53.771 }, 00:20:53.771 "peer_address": { 00:20:53.771 "trtype": "TCP", 00:20:53.771 "adrfam": "IPv4", 00:20:53.771 "traddr": "10.0.0.1", 00:20:53.771 "trsvcid": "48546" 00:20:53.771 }, 00:20:53.771 "auth": { 00:20:53.771 "state": "completed", 00:20:53.771 "digest": "sha512", 00:20:53.771 "dhgroup": "ffdhe3072" 00:20:53.771 } 00:20:53.771 } 00:20:53.771 ]' 00:20:53.771 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.028 09:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.285 09:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.221 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.477 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.478 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.039 00:20:56.039 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.039 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.039 09:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.039 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.296 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.296 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.296 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.296 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.296 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.296 { 00:20:56.296 "cntlid": 121, 00:20:56.297 "qid": 0, 00:20:56.297 "state": "enabled", 00:20:56.297 "thread": "nvmf_tgt_poll_group_000", 00:20:56.297 "listen_address": { 00:20:56.297 "trtype": "TCP", 00:20:56.297 "adrfam": "IPv4", 00:20:56.297 "traddr": "10.0.0.2", 00:20:56.297 "trsvcid": "4420" 00:20:56.297 }, 00:20:56.297 "peer_address": { 00:20:56.297 "trtype": "TCP", 00:20:56.297 "adrfam": 
"IPv4", 00:20:56.297 "traddr": "10.0.0.1", 00:20:56.297 "trsvcid": "48594" 00:20:56.297 }, 00:20:56.297 "auth": { 00:20:56.297 "state": "completed", 00:20:56.297 "digest": "sha512", 00:20:56.297 "dhgroup": "ffdhe4096" 00:20:56.297 } 00:20:56.297 } 00:20:56.297 ]' 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.297 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.555 09:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.485 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:57.742 
09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.742 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.999 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.999 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.999 09:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.258 00:20:58.258 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.258 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:58.258 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.516 { 00:20:58.516 "cntlid": 123, 00:20:58.516 "qid": 0, 00:20:58.516 "state": "enabled", 00:20:58.516 "thread": "nvmf_tgt_poll_group_000", 00:20:58.516 "listen_address": { 00:20:58.516 "trtype": "TCP", 00:20:58.516 "adrfam": "IPv4", 00:20:58.516 "traddr": "10.0.0.2", 00:20:58.516 "trsvcid": "4420" 00:20:58.516 }, 00:20:58.516 "peer_address": { 00:20:58.516 "trtype": "TCP", 00:20:58.516 "adrfam": "IPv4", 00:20:58.516 "traddr": "10.0.0.1", 00:20:58.516 "trsvcid": "43656" 00:20:58.516 }, 00:20:58.516 "auth": { 00:20:58.516 "state": "completed", 00:20:58.516 "digest": "sha512", 00:20:58.516 "dhgroup": "ffdhe4096" 00:20:58.516 } 00:20:58.516 } 00:20:58.516 ]' 00:20:58.516 09:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.516 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.517 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.773 09:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==: 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.143 09:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.143 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.399 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.659 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.917 { 00:21:00.917 "cntlid": 125, 00:21:00.917 "qid": 0, 00:21:00.917 "state": "enabled", 00:21:00.917 "thread": "nvmf_tgt_poll_group_000", 00:21:00.917 "listen_address": { 00:21:00.917 "trtype": "TCP", 00:21:00.917 "adrfam": "IPv4", 00:21:00.917 "traddr": "10.0.0.2", 00:21:00.917 "trsvcid": "4420" 00:21:00.917 }, 00:21:00.917 "peer_address": { 00:21:00.917 "trtype": "TCP", 00:21:00.917 "adrfam": "IPv4", 00:21:00.917 "traddr": "10.0.0.1", 00:21:00.917 "trsvcid": "43686" 00:21:00.917 }, 00:21:00.917 "auth": { 00:21:00.917 "state": "completed", 00:21:00.917 "digest": "sha512", 00:21:00.917 "dhgroup": "ffdhe4096" 00:21:00.917 } 00:21:00.917 } 00:21:00.917 ]' 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.917 
09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.917 09:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.175 09:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT: 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.107 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.107 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.364 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.928 00:21:02.928 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.928 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.928 09:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.928 { 00:21:02.928 "cntlid": 127, 00:21:02.928 "qid": 0, 00:21:02.928 "state": "enabled", 00:21:02.928 "thread": "nvmf_tgt_poll_group_000", 00:21:02.928 "listen_address": { 00:21:02.928 "trtype": "TCP", 00:21:02.928 "adrfam": "IPv4", 00:21:02.928 "traddr": "10.0.0.2", 00:21:02.928 "trsvcid": "4420" 00:21:02.928 }, 00:21:02.928 "peer_address": { 00:21:02.928 "trtype": "TCP", 00:21:02.928 "adrfam": "IPv4", 00:21:02.928 "traddr": "10.0.0.1", 00:21:02.928 "trsvcid": "43710" 00:21:02.928 }, 00:21:02.928 "auth": { 00:21:02.928 "state": "completed", 00:21:02.928 "digest": "sha512", 00:21:02.928 "dhgroup": "ffdhe4096" 00:21:02.928 } 00:21:02.928 } 00:21:02.928 ]' 00:21:02.928 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.185 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.442 09:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.373 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:04.630 09:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:05.194 
00:21:05.194 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:05.194 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:05.194 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:05.456 {
00:21:05.456 "cntlid": 129,
00:21:05.456 "qid": 0,
00:21:05.456 "state": "enabled",
00:21:05.456 "thread": "nvmf_tgt_poll_group_000",
00:21:05.456 "listen_address": {
00:21:05.456 "trtype": "TCP",
00:21:05.456 "adrfam": "IPv4",
00:21:05.456 "traddr": "10.0.0.2",
00:21:05.456 "trsvcid": "4420"
00:21:05.456 },
00:21:05.456 "peer_address": {
00:21:05.456 "trtype": "TCP",
00:21:05.456 "adrfam": "IPv4",
00:21:05.456 "traddr": "10.0.0.1",
00:21:05.456 "trsvcid": "43740"
00:21:05.456 },
00:21:05.456 "auth": {
00:21:05.456 "state": "completed",
00:21:05.456 "digest": "sha512",
00:21:05.456 "dhgroup": "ffdhe6144"
00:21:05.456 }
00:21:05.456 }
00:21:05.456 ]'
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:05.456 09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:05.714
09:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=:
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:06.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:06.646 09:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:06.903 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:07.468 
00:21:07.468 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:07.468 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:07.468 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:07.725 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:07.725 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:07.726 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:07.726 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:07.726 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:07.726 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:07.726 {
00:21:07.726 "cntlid": 131,
00:21:07.726 "qid": 0,
00:21:07.726 "state": "enabled",
00:21:07.726 "thread": "nvmf_tgt_poll_group_000",
00:21:07.726 "listen_address": {
00:21:07.726 "trtype": "TCP",
00:21:07.726 "adrfam": "IPv4",
00:21:07.726 "traddr": "10.0.0.2",
00:21:07.726 "trsvcid": "4420"
00:21:07.726 },
00:21:07.726 "peer_address": {
00:21:07.726 "trtype": "TCP",
00:21:07.726 "adrfam": "IPv4",
00:21:07.726 "traddr": "10.0.0.1",
00:21:07.726 "trsvcid": "43764"
00:21:07.726 },
00:21:07.726 "auth": {
00:21:07.726 "state": "completed",
00:21:07.726 "digest": "sha512",
00:21:07.726 "dhgroup": "ffdhe6144"
00:21:07.726 }
00:21:07.726 }
00:21:07.726 ]'
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:07.983 09:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:08.241 09:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret
DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==:
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:09.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:09.234 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:09.492 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:10.058
00:21:10.058 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:10.058 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:10.058 09:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:10.316 {
00:21:10.316 "cntlid": 133,
00:21:10.316 "qid": 0,
00:21:10.316 "state": "enabled",
00:21:10.316 "thread": "nvmf_tgt_poll_group_000",
00:21:10.316 "listen_address": {
00:21:10.316 "trtype": "TCP",
00:21:10.316 "adrfam": "IPv4",
00:21:10.316 "traddr": "10.0.0.2",
00:21:10.316 "trsvcid": "4420"
00:21:10.316 },
00:21:10.316 "peer_address": {
00:21:10.316 "trtype": "TCP",
00:21:10.316 "adrfam": "IPv4",
00:21:10.316 "traddr": "10.0.0.1",
00:21:10.316 "trsvcid": "60924"
00:21:10.316 },
00:21:10.316 "auth": {
00:21:10.316 "state": "completed",
00:21:10.316 "digest": "sha512",
00:21:10.316 "dhgroup": "ffdhe6144"
00:21:10.316 }
00:21:10.316 }
00:21:10.316 ]'
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:10.316 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:10.574 09:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT:
00:21:11.508 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:11.765 NQN:nqn.2024-03.io.spdk:cnode0
disconnected 1 controller(s)
00:21:11.765 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:11.765 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:11.765 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.765 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:11.765 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:11.766 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:11.766 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:12.024 09:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:12.590 
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_get_controllers
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:12.590 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:12.590 {
00:21:12.590 "cntlid": 135,
00:21:12.590 "qid": 0,
00:21:12.590 "state": "enabled",
00:21:12.590 "thread": "nvmf_tgt_poll_group_000",
00:21:12.590 "listen_address": {
00:21:12.590 "trtype": "TCP",
00:21:12.590 "adrfam": "IPv4",
00:21:12.590 "traddr": "10.0.0.2",
00:21:12.590 "trsvcid": "4420"
00:21:12.590 },
00:21:12.590 "peer_address": {
00:21:12.590 "trtype": "TCP",
00:21:12.590 "adrfam": "IPv4",
00:21:12.590 "traddr": "10.0.0.1",
00:21:12.590 "trsvcid": "60956"
00:21:12.590 },
00:21:12.590 "auth": {
00:21:12.590 "state": "completed",
00:21:12.590 "digest": "sha512",
00:21:12.590 "dhgroup": "ffdhe6144"
00:21:12.590 }
00:21:12.590 }
00:21:12.590 ]'
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:12.848 09:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:13.106 09:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=:
00:21:14.039 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:14.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:14.039 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:14.039 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:14.039 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x
00:21:14.039 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:14.040 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:21:14.040 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:14.040 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:14.040 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.297 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:14.298 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:14.298 09:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:15.231 
00:21:15.231 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:15.231 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:15.231 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:15.488 {
00:21:15.488 "cntlid": 137,
00:21:15.488 "qid": 0,
00:21:15.488 "state": "enabled",
00:21:15.488 "thread": "nvmf_tgt_poll_group_000",
00:21:15.488 "listen_address": {
00:21:15.488 "trtype": "TCP",
00:21:15.488 "adrfam": "IPv4",
00:21:15.488 "traddr": "10.0.0.2",
00:21:15.488 "trsvcid": "4420"
00:21:15.488 },
00:21:15.488 "peer_address": {
00:21:15.488 "trtype": "TCP",
00:21:15.488 "adrfam": "IPv4",
00:21:15.488 "traddr": "10.0.0.1",
00:21:15.488 "trsvcid": "32768"
00:21:15.488 },
00:21:15.488 "auth": {
00:21:15.488 "state": "completed",
00:21:15.488 "digest": "sha512",
00:21:15.488 "dhgroup": "ffdhe8192"
00:21:15.488 }
00:21:15.488 }
00:21:15.488 ]'
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:15.488 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.053 09:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=:
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:16.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:16.986 09:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:17.244 09:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:18.177 
00:21:18.177 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:18.177 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:18.177 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:18.435 {
00:21:18.435 "cntlid": 139,
00:21:18.435 "qid": 0,
00:21:18.435 "state": "enabled",
00:21:18.435 "thread": "nvmf_tgt_poll_group_000",
00:21:18.435 "listen_address": {
00:21:18.435 "trtype": "TCP",
00:21:18.435 "adrfam": "IPv4",
00:21:18.435 "traddr": "10.0.0.2",
00:21:18.435 "trsvcid": "4420"
00:21:18.435 },
00:21:18.435 "peer_address": {
00:21:18.435 "trtype": "TCP",
00:21:18.435 "adrfam": "IPv4",
00:21:18.435 "traddr": "10.0.0.1",
00:21:18.435 "trsvcid": "32814"
00:21:18.435 },
00:21:18.435 "auth": {
00:21:18.435 "state": "completed",
00:21:18.435 "digest": "sha512",
00:21:18.435 "dhgroup": "ffdhe8192"
00:21:18.435 }
00:21:18.435 }
00:21:18.435 ]'
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:18.435 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:18.693 09:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:01:NDRiNGY2NTIwYTdkYTFhYWQxNmQ2MGQ2N2I0ZWFmYmVJe4xc: --dhchap-ctrl-secret DHHC-1:02:Y2Y2MTAxOWZhZjlhYTk2NmY5NTAwYzM4YjQyMmU0MzJhOTU3ZTFjYmViNDZlNmQy0QMNqg==:
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:19.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:19.627 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:19.885 09:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:20.818 
00:21:20.818 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:20.818 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:20.818 09:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:21.076 {
00:21:21.076 "cntlid": 141,
00:21:21.076 "qid": 0,
00:21:21.076 "state": "enabled",
00:21:21.076 "thread": "nvmf_tgt_poll_group_000",
00:21:21.076 "listen_address":
{
00:21:21.076 "trtype": "TCP",
00:21:21.076 "adrfam": "IPv4",
00:21:21.076 "traddr": "10.0.0.2",
00:21:21.076 "trsvcid": "4420"
00:21:21.076 },
00:21:21.076 "peer_address": {
00:21:21.076 "trtype": "TCP",
00:21:21.076 "adrfam": "IPv4",
00:21:21.076 "traddr": "10.0.0.1",
00:21:21.076 "trsvcid": "53270"
00:21:21.076 },
00:21:21.076 "auth": {
00:21:21.076 "state": "completed",
00:21:21.076 "digest": "sha512",
00:21:21.076 "dhgroup": "ffdhe8192"
00:21:21.076 }
00:21:21.076 }
00:21:21.076 ]'
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:21.076 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:21:21.077 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:21.077 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:21:21.334 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:21.334 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:21.334 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:21.592 09:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:02:ZGRjNjI4NzQ2NWEzYmJjODY4ZDMzNjBjMzY3YTE0YmQzMDRmNmMyN2Q1YjQxNWViLf5afA==: --dhchap-ctrl-secret DHHC-1:01:MGQyZTJhZGVjNjU4OWQyZDZiOGRhYmQ1NDU4MzRiYTAh9kyT:
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:22.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:22.526 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:22.785 09:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:21:23.718 
00:21:23.718 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:23.718 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:23.718 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:23.976 {
00:21:23.976 "cntlid": 143,
00:21:23.976 "qid": 0,
00:21:23.976 "state": "enabled",
00:21:23.976 "thread": "nvmf_tgt_poll_group_000",
00:21:23.976 "listen_address": {
00:21:23.976 "trtype": "TCP",
00:21:23.976 "adrfam": "IPv4",
00:21:23.976 "traddr": "10.0.0.2",
00:21:23.976 "trsvcid": "4420"
00:21:23.976 },
00:21:23.976 "peer_address": {
00:21:23.976 "trtype": "TCP",
00:21:23.976 "adrfam": "IPv4",
00:21:23.976 "traddr": "10.0.0.1",
00:21:23.976 "trsvcid": "53292"
00:21:23.976 },
00:21:23.976 "auth": {
00:21:23.976 "state": "completed",
00:21:23.976 "digest": "sha512",
00:21:23.976 "dhgroup":
"ffdhe8192" 00:21:23.976 } 00:21:23.976 } 00:21:23.976 ]' 00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.976 09:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:23.976 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.976 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:23.976 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.976 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.976 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.234 09:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.168 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:25.425 09:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:26.359 
00:21:26.359 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:21:26.359 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:21:26.359 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:26.617 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:21:26.617 {
00:21:26.618 "cntlid": 145,
00:21:26.618 "qid": 0,
00:21:26.618 "state": "enabled",
00:21:26.618 "thread": "nvmf_tgt_poll_group_000",
00:21:26.618 "listen_address": {
00:21:26.618 "trtype": "TCP",
00:21:26.618 "adrfam": "IPv4",
00:21:26.618 "traddr": "10.0.0.2",
00:21:26.618 "trsvcid": "4420"
00:21:26.618 },
00:21:26.618 "peer_address": {
00:21:26.618 "trtype": "TCP",
00:21:26.618 "adrfam": "IPv4",
00:21:26.618 "traddr": "10.0.0.1",
00:21:26.618 "trsvcid": "53316"
00:21:26.618 },
00:21:26.618 "auth": {
00:21:26.618
"state": "completed", 00:21:26.618 "digest": "sha512", 00:21:26.618 "dhgroup": "ffdhe8192" 00:21:26.618 } 00:21:26.618 } 00:21:26.618 ]' 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.618 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.875 09:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:00:NTE3ZTk3NGIwYzMwZWU2MWM0YzQ0ODRhODUxMWZmZTVkZjkwZjAzYjcyOGJmMDZlgacdWA==: --dhchap-ctrl-secret DHHC-1:03:MjQ1ZjRhZTg1NWFkNDljODdiYzQxNGQyM2JmYjg4MzcxMjA3NTY2NjcyZWJlNjI4MzY1Mjk4OTBlNjNkZDU3N+HJmNk=: 00:21:28.286 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.286 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.286 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.286 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.286 09:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:28.286 09:07:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.286 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:28.852 request: 00:21:28.852 { 00:21:28.852 "name": "nvme0", 00:21:28.852 "trtype": "tcp", 00:21:28.852 "traddr": "10.0.0.2", 00:21:28.852 "adrfam": "ipv4", 00:21:28.852 "trsvcid": "4420", 00:21:28.852 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:28.852 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:28.852 "prchk_reftag": false, 00:21:28.852 "prchk_guard": false, 00:21:28.852 "hdgst": false, 00:21:28.852 "ddgst": false, 00:21:28.852 "dhchap_key": "key2", 00:21:28.852 "method": "bdev_nvme_attach_controller", 00:21:28.852 "req_id": 1 00:21:28.852 } 00:21:28.852 Got JSON-RPC error response 00:21:28.852 response: 00:21:28.852 { 00:21:28.852 "code": -5, 00:21:28.852 "message": "Input/output error" 00:21:28.852 } 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.852 
09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:28.852 09:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:29.786 request: 00:21:29.786 { 00:21:29.786 "name": "nvme0", 00:21:29.786 "trtype": "tcp", 00:21:29.786 "traddr": "10.0.0.2", 00:21:29.786 "adrfam": "ipv4", 00:21:29.786 "trsvcid": "4420", 00:21:29.786 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:29.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:29.786 "prchk_reftag": false, 00:21:29.786 "prchk_guard": false, 00:21:29.786 "hdgst": false, 00:21:29.786 "ddgst": false, 00:21:29.786 "dhchap_key": "key1", 00:21:29.786 "dhchap_ctrlr_key": "ckey2", 00:21:29.786 "method": "bdev_nvme_attach_controller", 00:21:29.786 "req_id": 1 00:21:29.786 } 00:21:29.786 Got JSON-RPC error response 00:21:29.786 response: 00:21:29.786 { 00:21:29.786 "code": -5, 00:21:29.786 "message": "Input/output error" 00:21:29.786 } 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:29.786 09:07:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.786 09:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:30.721 request: 00:21:30.721 { 00:21:30.721 "name": "nvme0", 00:21:30.721 "trtype": "tcp", 00:21:30.721 "traddr": "10.0.0.2", 00:21:30.721 "adrfam": "ipv4", 00:21:30.721 "trsvcid": "4420", 00:21:30.721 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:30.721 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:30.721 "prchk_reftag": false, 00:21:30.721 "prchk_guard": false, 00:21:30.721 "hdgst": false, 00:21:30.721 "ddgst": false, 00:21:30.721 "dhchap_key": "key1", 00:21:30.721 "dhchap_ctrlr_key": "ckey1", 00:21:30.721 "method": "bdev_nvme_attach_controller", 00:21:30.721 "req_id": 1 00:21:30.721 } 00:21:30.721 Got JSON-RPC error response 00:21:30.721 response: 00:21:30.721 { 00:21:30.721 "code": -5, 00:21:30.721 "message": "Input/output error" 00:21:30.721 } 00:21:30.721 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:30.721 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:30.721 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3776065 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3776065 ']' 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3776065 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3776065 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3776065' 00:21:30.722 killing process with pid 3776065 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3776065 00:21:30.722 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3776065 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3798442 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3798442 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3798442 ']' 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:30.980 09:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3798442 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3798442 ']' 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.238 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.239 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
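Note on the three failed attach attempts logged above: the target registered the host with key1 only, so attaching with key2, with a mismatched controller key (ckey2), or with ckey1 when no controller key was registered on the target is expected to fail, and each attempt produces the JSON-RPC error {"code": -5, "message": "Input/output error"}. A condensed sketch of one such check, using only RPCs that appear verbatim in this run (paths shortened relative to the spdk checkout; key1/key2 name keys registered earlier in the run, corresponding to the /tmp/spdk.key-* files removed at cleanup):

  # Target side: authorize the host with key1 only, no controller key.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --dhchap-key key1

  # Host side: attaching with key2 must be rejected with code -5; the script's
  # NOT/valid_exec_arg wrapper (autotest_common.sh@648-651) asserts exactly this.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 \
      || echo 'attach failed as expected'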
00:21:31.239 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.239 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.497 09:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:32.431 00:21:32.431 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:32.431 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:32.431 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:32.689 { 00:21:32.689 "cntlid": 1, 00:21:32.689 "qid": 0, 00:21:32.689 "state": "enabled", 00:21:32.689 "thread": "nvmf_tgt_poll_group_000", 00:21:32.689 "listen_address": { 00:21:32.689 "trtype": "TCP", 00:21:32.689 "adrfam": "IPv4", 00:21:32.689 "traddr": "10.0.0.2", 00:21:32.689 "trsvcid": "4420" 00:21:32.689 }, 00:21:32.689 "peer_address": { 00:21:32.689 "trtype": "TCP", 00:21:32.689 "adrfam": "IPv4", 00:21:32.689 "traddr": "10.0.0.1", 00:21:32.689 "trsvcid": "56296" 00:21:32.689 }, 00:21:32.689 "auth": { 00:21:32.689 "state": "completed", 00:21:32.689 "digest": "sha512", 00:21:32.689 "dhgroup": "ffdhe8192" 00:21:32.689 } 00:21:32.689 } 00:21:32.689 ]' 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.689 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.946 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.946 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.946 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.946 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.946 09:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.205 09:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-secret DHHC-1:03:ZTllNDFkOTliYTI1NjgyMzUzNWEzM2QwNzU5Mzc5MDlmMTRhOTExOTBjOTRlZTk4NDNhYjNmYzhkNWQ5Y2M3MQurgqU=: 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:34.138 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.396 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.654 request: 00:21:34.654 { 00:21:34.654 "name": "nvme0", 00:21:34.654 "trtype": "tcp", 00:21:34.654 "traddr": "10.0.0.2", 00:21:34.654 "adrfam": "ipv4", 00:21:34.654 "trsvcid": "4420", 00:21:34.654 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:34.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:34.654 "prchk_reftag": false, 00:21:34.654 "prchk_guard": false, 00:21:34.654 "hdgst": false, 00:21:34.654 "ddgst": false, 00:21:34.654 "dhchap_key": "key3", 00:21:34.654 "method": "bdev_nvme_attach_controller", 00:21:34.654 "req_id": 1 00:21:34.654 } 00:21:34.654 Got JSON-RPC error response 00:21:34.654 response: 00:21:34.654 { 00:21:34.654 "code": -5, 00:21:34.654 "message": "Input/output error" 00:21:34.654 } 00:21:34.654 09:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.654 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.912 09:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.169 request: 00:21:35.169 { 00:21:35.169 "name": "nvme0", 00:21:35.169 "trtype": "tcp", 00:21:35.169 "traddr": "10.0.0.2", 00:21:35.169 "adrfam": "ipv4", 00:21:35.169 "trsvcid": "4420", 00:21:35.169 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:35.169 "prchk_reftag": false, 00:21:35.169 "prchk_guard": false, 00:21:35.169 "hdgst": false, 00:21:35.169 "ddgst": false, 00:21:35.169 "dhchap_key": "key3", 00:21:35.169 
"method": "bdev_nvme_attach_controller", 00:21:35.169 "req_id": 1 00:21:35.169 } 00:21:35.169 Got JSON-RPC error response 00:21:35.169 response: 00:21:35.169 { 00:21:35.169 "code": -5, 00:21:35.169 "message": "Input/output error" 00:21:35.169 } 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.169 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.170 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.427 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:35.685 request: 00:21:35.685 { 00:21:35.685 "name": "nvme0", 00:21:35.685 "trtype": "tcp", 00:21:35.685 "traddr": "10.0.0.2", 00:21:35.685 "adrfam": "ipv4", 00:21:35.685 "trsvcid": "4420", 00:21:35.685 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:35.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:21:35.685 "prchk_reftag": false, 00:21:35.685 "prchk_guard": false, 00:21:35.685 "hdgst": false, 00:21:35.685 "ddgst": false, 00:21:35.685 "dhchap_key": "key0", 00:21:35.685 "dhchap_ctrlr_key": "key1", 00:21:35.685 "method": "bdev_nvme_attach_controller", 00:21:35.685 "req_id": 1 00:21:35.685 } 00:21:35.685 Got JSON-RPC error response 00:21:35.685 response: 00:21:35.685 { 00:21:35.685 "code": -5, 00:21:35.685 "message": "Input/output error" 00:21:35.685 } 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.685 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:35.942 00:21:35.942 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:35.942 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.942 09:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:36.199 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.199 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.199 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3776172 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3776172 ']' 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3776172 00:21:36.456 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3776172 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3776172' 00:21:36.457 killing process with pid 3776172 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3776172 00:21:36.457 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3776172 00:21:37.021 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:37.022 rmmod nvme_tcp 00:21:37.022 rmmod nvme_fabrics 00:21:37.022 rmmod nvme_keyring 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@489 -- # '[' -n 3798442 ']' 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3798442 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3798442 ']' 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3798442 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3798442 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3798442' 00:21:37.022 killing process with pid 3798442 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3798442 00:21:37.022 09:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3798442 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:37.281 09:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.fS2 /tmp/spdk.key-sha256.vSv /tmp/spdk.key-sha384.QQg /tmp/spdk.key-sha512.sGO /tmp/spdk.key-sha512.lfq /tmp/spdk.key-sha384.Ilf /tmp/spdk.key-sha256.MSI '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:39.185 00:21:39.185 real 3m8.034s 00:21:39.185 user 7m17.633s 00:21:39.185 sys 0m24.583s 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 ************************************ 00:21:39.185 END TEST nvmf_auth_target 00:21:39.185 ************************************ 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:39.185 09:07:17 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:39.185 ************************************ 00:21:39.185 START TEST nvmf_bdevio_no_huge 00:21:39.185 ************************************ 00:21:39.185 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:39.443 * Looking for test storage... 00:21:39.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.443 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:39.444 09:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:39.444 09:07:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:41.346 09:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:41.346 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:41.347 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.347 09:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:41.347 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:41.347 Found net devices under 0000:09:00.0: cvl_0_0 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:41.347 Found net devices under 0000:09:00.1: cvl_0_1 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:41.347 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:21:41.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:21:41.347 00:21:41.347 --- 10.0.0.2 ping statistics --- 00:21:41.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.347 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:41.347 00:21:41.347 --- 10.0.0.1 ping statistics --- 00:21:41.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.347 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3801202 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3801202 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3801202 ']' 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
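[editor's note] The nvmf_tcp_init sequence traced above is the whole point-to-point topology for these phy tests: one port of the two-port e810 NIC stays in the root namespace as the initiator, while the other port is moved into a private namespace for the target, so NVMe/TCP traffic crosses real hardware on a single host. Condensed into a standalone sketch, with the interface names, addresses, and the port-4420 firewall rule exactly as traced above:

ip netns add cvl_0_0_ns_spdk                                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

Both pings answering, as they do in the statistics above and below, is what lets nvmf_tcp_init return 0.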
00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.347 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.347 [2024-07-24 09:07:19.430027] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:21:41.347 [2024-07-24 09:07:19.430125] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:41.606 [2024-07-24 09:07:19.479534] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:41.606 [2024-07-24 09:07:19.501619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.606 [2024-07-24 09:07:19.590392] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.606 [2024-07-24 09:07:19.590452] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.606 [2024-07-24 09:07:19.590469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.606 [2024-07-24 09:07:19.590482] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.606 [2024-07-24 09:07:19.590493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.606 [2024-07-24 09:07:19.590586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.606 [2024-07-24 09:07:19.590622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:21:41.606 [2024-07-24 09:07:19.590675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:21:41.606 [2024-07-24 09:07:19.590677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.606 [2024-07-24 09:07:19.710563] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b 
Malloc0 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.606 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.865 Malloc0 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:41.865 [2024-07-24 09:07:19.748412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.865 { 00:21:41.865 "params": { 00:21:41.865 "name": "Nvme$subsystem", 00:21:41.865 "trtype": "$TEST_TRANSPORT", 00:21:41.865 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.865 "adrfam": "ipv4", 00:21:41.865 "trsvcid": "$NVMF_PORT", 00:21:41.865 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.865 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.865 "hdgst": ${hdgst:-false}, 00:21:41.865 "ddgst": ${ddgst:-false} 00:21:41.865 }, 00:21:41.865 "method": "bdev_nvme_attach_controller" 00:21:41.865 } 00:21:41.865 EOF 00:21:41.865 )") 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 
-- # cat 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:41.865 09:07:19 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:41.865 "params": { 00:21:41.865 "name": "Nvme1", 00:21:41.865 "trtype": "tcp", 00:21:41.865 "traddr": "10.0.0.2", 00:21:41.865 "adrfam": "ipv4", 00:21:41.865 "trsvcid": "4420", 00:21:41.865 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.865 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.865 "hdgst": false, 00:21:41.865 "ddgst": false 00:21:41.865 }, 00:21:41.865 "method": "bdev_nvme_attach_controller" 00:21:41.865 }' 00:21:41.865 [2024-07-24 09:07:19.792437] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:21:41.865 [2024-07-24 09:07:19.792531] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3801241 ] 00:21:41.865 [2024-07-24 09:07:19.833166] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:41.865 [2024-07-24 09:07:19.852368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:41.865 [2024-07-24 09:07:19.935757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.865 [2024-07-24 09:07:19.935807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.865 [2024-07-24 09:07:19.935810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.123 I/O targets: 00:21:42.123 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:42.123 00:21:42.123 00:21:42.123 CUnit - A unit testing framework for C - Version 2.1-3 00:21:42.123 http://cunit.sourceforge.net/ 00:21:42.123 00:21:42.123 00:21:42.123 Suite: bdevio tests on: Nvme1n1 00:21:42.123 Test: blockdev write read block ...passed 00:21:42.123 Test: blockdev write zeroes read block ...passed 00:21:42.123 Test: blockdev write zeroes read no split ...passed 00:21:42.381 Test: blockdev write zeroes read split ...passed 00:21:42.381 Test: blockdev write zeroes read split partial ...passed 00:21:42.381 Test: blockdev reset ...[2024-07-24 09:07:20.259236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.381 [2024-07-24 09:07:20.259341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x974330 (9): Bad file descriptor 00:21:42.381 [2024-07-24 09:07:20.272704] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:21:42.381 passed 00:21:42.381 Test: blockdev write read 8 blocks ...passed 00:21:42.381 Test: blockdev write read size > 128k ...passed 00:21:42.381 Test: blockdev write read invalid size ...passed 00:21:42.381 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:42.381 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:42.381 Test: blockdev write read max offset ...passed 00:21:42.381 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:42.381 Test: blockdev writev readv 8 blocks ...passed 00:21:42.381 Test: blockdev writev readv 30 x 1block ...passed 00:21:42.639 Test: blockdev writev readv block ...passed 00:21:42.639 Test: blockdev writev readv size > 128k ...passed 00:21:42.639 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:42.639 Test: blockdev comparev and writev ...[2024-07-24 09:07:20.528047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.528115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.528478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.528524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.528865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.528915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.528932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.529254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.529277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:42.639 [2024-07-24 09:07:20.529299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:42.639 [2024-07-24 09:07:20.529314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:42.639 passed 00:21:42.639 Test: blockdev nvme passthru rw ...passed 00:21:42.640 Test: blockdev nvme passthru vendor specific ...[2024-07-24 09:07:20.613459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.640 [2024-07-24 09:07:20.613487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:42.640 [2024-07-24 09:07:20.613655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.640 [2024-07-24 09:07:20.613678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:42.640 [2024-07-24 09:07:20.613852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.640 [2024-07-24 09:07:20.613874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:42.640 [2024-07-24 09:07:20.614046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:42.640 [2024-07-24 09:07:20.614069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:42.640 passed 00:21:42.640 Test: blockdev nvme admin passthru ...passed 00:21:42.640 Test: blockdev copy ...passed 00:21:42.640 00:21:42.640 Run Summary: Type Total Ran Passed Failed Inactive 00:21:42.640 suites 1 1 n/a 0 0 00:21:42.640 tests 23 23 23 0 0 00:21:42.640 asserts 152 152 152 0 n/a 00:21:42.640 00:21:42.640 Elapsed time = 1.071 seconds 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.898 09:07:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.898 rmmod nvme_tcp 00:21:43.157 rmmod nvme_fabrics 00:21:43.157 rmmod nvme_keyring 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3801202 ']' 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3801202 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3801202 ']' 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3801202 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3801202 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3801202' 00:21:43.157 killing process with pid 3801202 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3801202 00:21:43.157 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3801202 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.415 09:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.959 00:21:45.959 real 0m6.252s 00:21:45.959 user 0m9.842s 00:21:45.959 sys 0m2.403s 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:45.959 ************************************ 00:21:45.959 END TEST nvmf_bdevio_no_huge 00:21:45.959 ************************************ 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.959 ************************************ 00:21:45.959 START TEST nvmf_tls 00:21:45.959 ************************************ 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:45.959 * Looking for test storage... 00:21:45.959 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.959 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
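[editor's note] The build_nvmf_app_args trace ending just above is assembling the command line that will later launch the target inside the test namespace. A condensed sketch of what those array appends amount to, using the binary path and namespace names from this run (the -m/--wait-for-rpc arguments are the ones nvmfappstart adds later in this log):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)              # shm id + full tracepoint mask
# once nvmf_tcp_init has built the namespace, common.sh@270 prepends the wrapper:
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0x2 --wait-for-rpc                   # the invocation seen below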
00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.960 09:07:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:47.869 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.869 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:47.870 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:47.870 Found net devices under 0000:09:00.0: cvl_0_0 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:47.870 Found net devices under 0000:09:00.1: cvl_0_1 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:47.870 09:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:47.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:21:47.870 00:21:47.870 --- 10.0.0.2 ping statistics --- 00:21:47.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.870 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:47.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:47.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:21:47.870 00:21:47.870 --- 10.0.0.1 ping statistics --- 00:21:47.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.870 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3803310 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3803310 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3803310 ']' 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:47.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:47.870 09:07:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:47.870 [2024-07-24 09:07:25.766568] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
00:21:47.870 [2024-07-24 09:07:25.766651] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:47.870 EAL: No free 2048 kB hugepages reported on node 1 00:21:47.870 [2024-07-24 09:07:25.807243] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:21:47.870 [2024-07-24 09:07:25.833209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.870 [2024-07-24 09:07:25.922756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:47.870 [2024-07-24 09:07:25.922826] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:47.870 [2024-07-24 09:07:25.922842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:47.870 [2024-07-24 09:07:25.922857] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:47.870 [2024-07-24 09:07:25.922868] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:47.870 [2024-07-24 09:07:25.922897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:48.128 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:48.385 true 00:21:48.386 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.386 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:48.644 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:48.644 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:48.644 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:48.902 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:48.902 09:07:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:49.160 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:49.160 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 
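[editor's note] Because the target was started with --wait-for-rpc, tls.sh can adjust socket-implementation options over JSON-RPC before subsystem init runs; each change is read back with sock_impl_get_options piped through jq, as in the version=13 check just above. The whole set/verify/resume pattern reduces to:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc sock_set_default_impl -i ssl                     # route sockets through the ssl impl
$rpc sock_impl_set_options -i ssl --tls-version 13    # request TLS 1.3
[[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
$rpc framework_start_init                             # leave the --wait-for-rpc pause

The same get/set pairing repeats below for tls-version 7 and for the enable/disable-ktls flags before framework_start_init is finally issued.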
00:21:49.160 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:49.418 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.418 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:49.677 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:49.677 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:49.677 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:49.677 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:49.935 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:49.935 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:49.935 09:07:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:50.193 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.193 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:50.451 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:50.451 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:50.451 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:50.709 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:50.709 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # 
format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.GZad1oWyqs 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.mEFOWKcEUr 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.GZad1oWyqs 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.mEFOWKcEUr 00:21:50.968 09:07:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:51.226 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:51.793 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.GZad1oWyqs 00:21:51.793 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.GZad1oWyqs 00:21:51.793 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:52.051 [2024-07-24 09:07:29.936658] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.051 09:07:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:52.309 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:52.567 [2024-07-24 09:07:30.425966] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:52.567 [2024-07-24 09:07:30.426248] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.567 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:52.825 malloc0 00:21:52.825 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:53.083 09:07:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GZad1oWyqs 00:21:53.083 [2024-07-24 09:07:31.164352] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:53.083 09:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GZad1oWyqs 00:21:53.341 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.311 Initializing NVMe Controllers 00:22:03.311 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:03.311 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:03.311 Initialization complete. Launching workers. 00:22:03.311 ======================================================== 00:22:03.311 Latency(us) 00:22:03.311 Device Information : IOPS MiB/s Average min max 00:22:03.311 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7793.18 30.44 8214.95 1239.44 9158.46 00:22:03.311 ======================================================== 00:22:03.311 Total : 7793.18 30.44 8214.95 1239.44 9158.46 00:22:03.311 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GZad1oWyqs 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GZad1oWyqs' 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3805195 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3805195 /var/tmp/bdevperf.sock 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3805195 ']' 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.311 09:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.311 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.312 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.312 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:03.312 [2024-07-24 09:07:41.335883] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:03.312 [2024-07-24 09:07:41.335963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3805195 ] 00:22:03.312 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.312 [2024-07-24 09:07:41.365995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:03.312 [2024-07-24 09:07:41.393301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.570 [2024-07-24 09:07:41.479139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.570 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.570 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:03.570 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GZad1oWyqs 00:22:03.828 [2024-07-24 09:07:41.812689] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.828 [2024-07-24 09:07:41.812812] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:03.828 TLSTESTn1 00:22:03.828 09:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.088 Running I/O for 10 seconds... 
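For reference, the initiator side of this passing run reduces to three steps: start bdevperf with its own RPC socket, attach a TLS-wrapped NVMe-oF controller using the same PSK the target registered for the host NQN, and drive I/O through perform_tests. A minimal sketch with the binaries, flags, and socket path taken from this log ($spdk is illustrative shorthand; the waitforlisten/cleanup plumbing is omitted):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # The PSK file must match the key added via nvmf_subsystem_add_host,
    # otherwise the attach fails with the -5 Input/output error seen in the negative tests below.
    $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GZad1oWyqs
    $spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests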
00:22:14.050 00:22:14.050 Latency(us) 00:22:14.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.050 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:14.050 Verification LBA range: start 0x0 length 0x2000 00:22:14.050 TLSTESTn1 : 10.03 3261.10 12.74 0.00 0.00 39170.39 7670.14 60972.75 00:22:14.050 =================================================================================================================== 00:22:14.050 Total : 3261.10 12.74 0.00 0.00 39170.39 7670.14 60972.75 00:22:14.050 0 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3805195 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3805195 ']' 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3805195 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3805195 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3805195' 00:22:14.050 killing process with pid 3805195 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3805195 00:22:14.050 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.050 00:22:14.050 Latency(us) 00:22:14.050 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.050 =================================================================================================================== 00:22:14.050 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.050 [2024-07-24 09:07:52.097060] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.050 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3805195 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mEFOWKcEUr 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mEFOWKcEUr 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.mEFOWKcEUr 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.mEFOWKcEUr' 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3806390 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3806390 /var/tmp/bdevperf.sock 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806390 ']' 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.309 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.309 [2024-07-24 09:07:52.376126] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:14.309 [2024-07-24 09:07:52.376226] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806390 ] 00:22:14.309 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.309 [2024-07-24 09:07:52.409587] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:14.567 [2024-07-24 09:07:52.438147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.567 [2024-07-24 09:07:52.524141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.567 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:14.567 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:14.567 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.mEFOWKcEUr 00:22:14.825 [2024-07-24 09:07:52.853997] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.825 [2024-07-24 09:07:52.854162] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:14.825 [2024-07-24 09:07:52.860479] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:14.825 [2024-07-24 09:07:52.861006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205b8d0 (107): Transport endpoint is not connected 00:22:14.825 [2024-07-24 09:07:52.861996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205b8d0 (9): Bad file descriptor 00:22:14.825 [2024-07-24 09:07:52.862996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:14.825 [2024-07-24 09:07:52.863016] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:14.825 [2024-07-24 09:07:52.863049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:14.825 request: 00:22:14.825 { 00:22:14.825 "name": "TLSTEST", 00:22:14.825 "trtype": "tcp", 00:22:14.825 "traddr": "10.0.0.2", 00:22:14.825 "adrfam": "ipv4", 00:22:14.825 "trsvcid": "4420", 00:22:14.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.825 "prchk_reftag": false, 00:22:14.825 "prchk_guard": false, 00:22:14.825 "hdgst": false, 00:22:14.825 "ddgst": false, 00:22:14.825 "psk": "/tmp/tmp.mEFOWKcEUr", 00:22:14.825 "method": "bdev_nvme_attach_controller", 00:22:14.825 "req_id": 1 00:22:14.825 } 00:22:14.825 Got JSON-RPC error response 00:22:14.825 response: 00:22:14.825 { 00:22:14.825 "code": -5, 00:22:14.825 "message": "Input/output error" 00:22:14.825 } 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3806390 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806390 ']' 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806390 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806390 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806390' 00:22:14.825 killing process with pid 3806390 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806390 00:22:14.825 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.825 00:22:14.825 Latency(us) 00:22:14.825 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.825 =================================================================================================================== 00:22:14.825 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.825 [2024-07-24 09:07:52.913590] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:14.825 09:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806390 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GZad1oWyqs 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GZad1oWyqs 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GZad1oWyqs 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:15.083 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GZad1oWyqs' 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3806526 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3806526 /var/tmp/bdevperf.sock 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806526 ']' 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.084 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.084 [2024-07-24 09:07:53.182639] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:15.084 [2024-07-24 09:07:53.182727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806526 ] 00:22:15.355 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.355 [2024-07-24 09:07:53.214623] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:15.355 [2024-07-24 09:07:53.243259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.355 [2024-07-24 09:07:53.336196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:15.355 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.355 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:15.355 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.GZad1oWyqs 00:22:15.619 [2024-07-24 09:07:53.721435] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:15.619 [2024-07-24 09:07:53.721584] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:15.619 [2024-07-24 09:07:53.732757] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:15.619 [2024-07-24 09:07:53.732805] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:15.619 [2024-07-24 09:07:53.732861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:15.619 [2024-07-24 09:07:53.733452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe98d0 (107): Transport endpoint is not connected 00:22:15.619 [2024-07-24 09:07:53.734455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe98d0 (9): Bad file descriptor 00:22:15.876 [2024-07-24 09:07:53.735443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:15.876 [2024-07-24 09:07:53.735463] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:15.876 [2024-07-24 09:07:53.735482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:15.876 request: 00:22:15.876 { 00:22:15.876 "name": "TLSTEST", 00:22:15.876 "trtype": "tcp", 00:22:15.876 "traddr": "10.0.0.2", 00:22:15.876 "adrfam": "ipv4", 00:22:15.876 "trsvcid": "4420", 00:22:15.876 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.876 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:15.876 "prchk_reftag": false, 00:22:15.876 "prchk_guard": false, 00:22:15.876 "hdgst": false, 00:22:15.876 "ddgst": false, 00:22:15.876 "psk": "/tmp/tmp.GZad1oWyqs", 00:22:15.876 "method": "bdev_nvme_attach_controller", 00:22:15.876 "req_id": 1 00:22:15.876 } 00:22:15.876 Got JSON-RPC error response 00:22:15.876 response: 00:22:15.876 { 00:22:15.876 "code": -5, 00:22:15.876 "message": "Input/output error" 00:22:15.876 } 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3806526 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806526 ']' 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806526 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806526 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806526' 00:22:15.876 killing process with pid 3806526 00:22:15.876 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806526 00:22:15.876 Received shutdown signal, test time was about 10.000000 seconds 00:22:15.876 00:22:15.876 Latency(us) 00:22:15.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:15.876 =================================================================================================================== 00:22:15.877 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:15.877 [2024-07-24 09:07:53.786859] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:15.877 09:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806526 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GZad1oWyqs 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GZad1oWyqs 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GZad1oWyqs 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.GZad1oWyqs' 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3806661 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3806661 /var/tmp/bdevperf.sock 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806661 ']' 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.134 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.134 [2024-07-24 09:07:54.055264] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:16.134 [2024-07-24 09:07:54.055344] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806661 ] 00:22:16.135 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.135 [2024-07-24 09:07:54.085547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:16.135 [2024-07-24 09:07:54.112454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.135 [2024-07-24 09:07:54.192637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.392 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.392 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.392 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.GZad1oWyqs 00:22:16.680 [2024-07-24 09:07:54.524567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.680 [2024-07-24 09:07:54.524690] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.680 [2024-07-24 09:07:54.534709] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:16.680 [2024-07-24 09:07:54.534739] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:16.680 [2024-07-24 09:07:54.534791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:16.680 [2024-07-24 09:07:54.535502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c08d0 (107): Transport endpoint is not connected 00:22:16.680 [2024-07-24 09:07:54.536493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9c08d0 (9): Bad file descriptor 00:22:16.680 [2024-07-24 09:07:54.537493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:16.680 [2024-07-24 09:07:54.537525] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:16.680 [2024-07-24 09:07:54.537543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:16.680 request: 00:22:16.680 { 00:22:16.680 "name": "TLSTEST", 00:22:16.680 "trtype": "tcp", 00:22:16.680 "traddr": "10.0.0.2", 00:22:16.680 "adrfam": "ipv4", 00:22:16.680 "trsvcid": "4420", 00:22:16.680 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:16.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:16.680 "prchk_reftag": false, 00:22:16.680 "prchk_guard": false, 00:22:16.680 "hdgst": false, 00:22:16.680 "ddgst": false, 00:22:16.680 "psk": "/tmp/tmp.GZad1oWyqs", 00:22:16.680 "method": "bdev_nvme_attach_controller", 00:22:16.680 "req_id": 1 00:22:16.680 } 00:22:16.680 Got JSON-RPC error response 00:22:16.680 response: 00:22:16.680 { 00:22:16.680 "code": -5, 00:22:16.680 "message": "Input/output error" 00:22:16.680 } 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3806661 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806661 ']' 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806661 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806661 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806661' 00:22:16.680 killing process with pid 3806661 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806661 00:22:16.680 Received shutdown signal, test time was about 10.000000 seconds 00:22:16.680 00:22:16.680 Latency(us) 00:22:16.680 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:16.680 =================================================================================================================== 00:22:16.680 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:16.680 [2024-07-24 09:07:54.580378] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:16.680 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806661 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:16.938 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3806796 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3806796 /var/tmp/bdevperf.sock 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806796 ']' 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.939 09:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.939 [2024-07-24 09:07:54.822894] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:16.939 [2024-07-24 09:07:54.822980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3806796 ] 00:22:16.939 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.939 [2024-07-24 09:07:54.854079] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:16.939 [2024-07-24 09:07:54.881539] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.939 [2024-07-24 09:07:54.966266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.196 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.196 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:17.196 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:17.196 [2024-07-24 09:07:55.286743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:17.196 [2024-07-24 09:07:55.288312] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f97de0 (9): Bad file descriptor 00:22:17.196 [2024-07-24 09:07:55.289308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:17.196 [2024-07-24 09:07:55.289328] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:17.196 [2024-07-24 09:07:55.289361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:17.196 request: 00:22:17.196 { 00:22:17.196 "name": "TLSTEST", 00:22:17.196 "trtype": "tcp", 00:22:17.196 "traddr": "10.0.0.2", 00:22:17.196 "adrfam": "ipv4", 00:22:17.196 "trsvcid": "4420", 00:22:17.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:17.196 "prchk_reftag": false, 00:22:17.196 "prchk_guard": false, 00:22:17.196 "hdgst": false, 00:22:17.196 "ddgst": false, 00:22:17.196 "method": "bdev_nvme_attach_controller", 00:22:17.197 "req_id": 1 00:22:17.197 } 00:22:17.197 Got JSON-RPC error response 00:22:17.197 response: 00:22:17.197 { 00:22:17.197 "code": -5, 00:22:17.197 "message": "Input/output error" 00:22:17.197 } 00:22:17.197 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3806796 00:22:17.197 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806796 ']' 00:22:17.197 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806796 00:22:17.197 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.197 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806796 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806796' 00:22:17.454 killing process with pid 3806796 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806796 00:22:17.454 Received shutdown signal, test time was about 10.000000 seconds 00:22:17.454 00:22:17.454 Latency(us) 00:22:17.454 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.454 =================================================================================================================== 00:22:17.454 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806796 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3803310 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3803310 ']' 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3803310 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.454 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3803310 00:22:17.712 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:17.712 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:17.712 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3803310' 00:22:17.712 killing process with pid 3803310 00:22:17.712 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3803310 00:22:17.712 [2024-07-24 09:07:55.578619] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:17.712 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3803310 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:17.971 09:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.UmdSYQy2jI 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.UmdSYQy2jI 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3806944 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3806944 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3806944 ']' 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.971 09:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.971 [2024-07-24 09:07:55.930697] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:17.971 [2024-07-24 09:07:55.930789] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.971 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.971 [2024-07-24 09:07:55.967095] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:17.971 [2024-07-24 09:07:55.998828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.230 [2024-07-24 09:07:56.089194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.230 [2024-07-24 09:07:56.089258] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.230 [2024-07-24 09:07:56.089290] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.230 [2024-07-24 09:07:56.089320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.230 [2024-07-24 09:07:56.089344] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:18.230 [2024-07-24 09:07:56.089392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UmdSYQy2jI 00:22:18.230 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.488 [2024-07-24 09:07:56.455915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.488 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.746 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.005 [2024-07-24 09:07:56.969340] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.005 [2024-07-24 09:07:56.969608] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.005 09:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.263 malloc0 00:22:19.263 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.521 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:19.779 [2024-07-24 09:07:57.780012] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UmdSYQy2jI 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UmdSYQy2jI' 00:22:19.779 09:07:57 
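The setup_nvmf_tgt steps traced above (tls.sh@49-58) issue a fixed rpc.py sequence against the freshly started target. Consolidated below as a sketch; paths, NQNs, and addresses are exactly as they appear in this run, and $rpc is shorthand introduced here for readability.

# Target-side TLS setup, in the same order as the trace above:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.UmdSYQy2jI
chmod 0600 "$key"                              # PSK file must be owner-only (see the 0666 negative tests below)
$rpc nvmf_create_transport -t tcp -o           # "*** TCP Transport Init ***"
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener (experimental)
$rpc bdev_malloc_create 32 4096 -b malloc0     # 32 MiB malloc bdev backing namespace 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"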
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3807110 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3807110 /var/tmp/bdevperf.sock 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3807110 ']' 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:19.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:19.779 09:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:19.779 [2024-07-24 09:07:57.842558] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:19.779 [2024-07-24 09:07:57.842631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3807110 ] 00:22:19.779 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.779 [2024-07-24 09:07:57.874476] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:20.038 [2024-07-24 09:07:57.901932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.038 [2024-07-24 09:07:57.987229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.038 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:20.038 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:20.038 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:20.296 [2024-07-24 09:07:58.316587] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.296 [2024-07-24 09:07:58.316714] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:20.296 TLSTESTn1 00:22:20.553 09:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:20.553 Running I/O for 10 seconds... 00:22:30.520 00:22:30.520 Latency(us) 00:22:30.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.520 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:30.520 Verification LBA range: start 0x0 length 0x2000 00:22:30.520 TLSTESTn1 : 10.05 1875.99 7.33 0.00 0.00 68048.23 5971.06 68739.98 00:22:30.520 =================================================================================================================== 00:22:30.520 Total : 1875.99 7.33 0.00 0.00 68048.23 5971.06 68739.98 00:22:30.520 0 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3807110 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3807110 ']' 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3807110 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:30.520 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3807110 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3807110' 00:22:30.779 killing process with pid 3807110 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3807110 00:22:30.779 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.779 00:22:30.779 Latency(us) 00:22:30.779 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.779 
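On the initiator side, the trace above starts bdevperf with only its RPC socket live (-z waits for configuration), attaches a TLS controller to the listener just created, and then drives the verify workload via bdevperf.py. A sketch of that flow, using the same arguments as the run above:

# Initiator-side flow, as traced above:
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# ... wait for /var/tmp/bdevperf.sock to accept RPCs (waitforlisten) ...
$spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI
$spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
# success looks like the TLSTESTn1 row above: ~10 s runtime, non-zero IOPS, no Fail/s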
=================================================================================================================== 00:22:30.779 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.779 [2024-07-24 09:08:08.643271] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3807110 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.UmdSYQy2jI 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UmdSYQy2jI 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UmdSYQy2jI 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UmdSYQy2jI 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.UmdSYQy2jI' 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3808421 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3808421 /var/tmp/bdevperf.sock 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3808421 ']' 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
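killprocess, whose xtrace repeats throughout this section, double-checks that the pid still names a live SPDK reactor before signalling it. A loose sketch of the helper, under the assumption that it matches the autotest_common.sh checks visible in the trace (the sudo special case is elided):

# Sketch of the killprocess helper traced above (approximate; see autotest_common.sh):
killprocess_sketch() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2> /dev/null || return 1    # process must still exist
  local name
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1 / reactor_2 in the traces above
  # (the real helper branches when name = "sudo"; elided in this sketch)
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                # reap the child so the exit status is observed
}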
00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:30.779 09:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.038 [2024-07-24 09:08:08.922122] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:31.038 [2024-07-24 09:08:08.922205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3808421 ] 00:22:31.038 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.038 [2024-07-24 09:08:08.953543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:31.038 [2024-07-24 09:08:08.980595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.038 [2024-07-24 09:08:09.061821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.296 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:31.296 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:31.296 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:31.554 [2024-07-24 09:08:09.413515] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:31.554 [2024-07-24 09:08:09.413597] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:31.554 [2024-07-24 09:08:09.413612] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.UmdSYQy2jI 00:22:31.554 request: 00:22:31.554 { 00:22:31.554 "name": "TLSTEST", 00:22:31.555 "trtype": "tcp", 00:22:31.555 "traddr": "10.0.0.2", 00:22:31.555 "adrfam": "ipv4", 00:22:31.555 "trsvcid": "4420", 00:22:31.555 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:31.555 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:31.555 "prchk_reftag": false, 00:22:31.555 "prchk_guard": false, 00:22:31.555 "hdgst": false, 00:22:31.555 "ddgst": false, 00:22:31.555 "psk": "/tmp/tmp.UmdSYQy2jI", 00:22:31.555 "method": "bdev_nvme_attach_controller", 00:22:31.555 "req_id": 1 00:22:31.555 } 00:22:31.555 Got JSON-RPC error response 00:22:31.555 response: 00:22:31.555 { 00:22:31.555 "code": -1, 00:22:31.555 "message": "Operation not permitted" 00:22:31.555 } 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3808421 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3808421 ']' 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3808421 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808421 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
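The "Operation not permitted" response above is the point of the tls.sh@171 negative test: the key was deliberately loosened to 0666 and bdev_nvme_attach_controller is expected to reject it ("Incorrect permissions for PSK file"). A sketch of that assertion, with the NOT/run_bdevperf wrappers from the trace reduced to a plain if:

# Negative test sketched from the trace above: a world-readable PSK must fail to load.
chmod 0666 /tmp/tmp.UmdSYQy2jI
if $spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
     -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
     -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI; then
  echo "attach unexpectedly succeeded with a 0666 key" >&2
  exit 1
fi
# expected JSON-RPC response, as captured above:
#   {"code": -1, "message": "Operation not permitted"}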
process_name=reactor_2 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808421' 00:22:31.555 killing process with pid 3808421 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3808421 00:22:31.555 Received shutdown signal, test time was about 10.000000 seconds 00:22:31.555 00:22:31.555 Latency(us) 00:22:31.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.555 =================================================================================================================== 00:22:31.555 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3808421 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3806944 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3806944 ']' 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3806944 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:31.555 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3806944 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3806944' 00:22:31.813 killing process with pid 3806944 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3806944 00:22:31.813 [2024-07-24 09:08:09.687702] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3806944 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3808565 00:22:31.813 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3808565 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3808565 ']' 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:31.814 09:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.072 [2024-07-24 09:08:09.961889] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:32.072 [2024-07-24 09:08:09.961969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.072 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.072 [2024-07-24 09:08:09.997403] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:32.072 [2024-07-24 09:08:10.030697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.072 [2024-07-24 09:08:10.124744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.072 [2024-07-24 09:08:10.124804] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.072 [2024-07-24 09:08:10.124820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:32.072 [2024-07-24 09:08:10.124834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:32.072 [2024-07-24 09:08:10.124845] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:32.072 [2024-07-24 09:08:10.124874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UmdSYQy2jI 00:22:32.331 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.590 [2024-07-24 09:08:10.506133] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.590 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.848 09:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:33.107 [2024-07-24 09:08:11.007451] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.107 [2024-07-24 09:08:11.007671] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.107 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:33.366 malloc0 00:22:33.366 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.624 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:33.883 [2024-07-24 09:08:11.820501] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:33.883 [2024-07-24 09:08:11.820545] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:33.883 [2024-07-24 09:08:11.820592] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:33.883 request: 00:22:33.883 { 00:22:33.883 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:33.883 "host": "nqn.2016-06.io.spdk:host1", 00:22:33.883 "psk": "/tmp/tmp.UmdSYQy2jI", 00:22:33.883 "method": "nvmf_subsystem_add_host", 00:22:33.883 "req_id": 1 00:22:33.883 } 00:22:33.883 Got JSON-RPC error response 00:22:33.883 response: 00:22:33.883 { 00:22:33.883 "code": -32603, 00:22:33.883 "message": "Internal error" 00:22:33.883 } 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3808565 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3808565 ']' 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3808565 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:33.883 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808565 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808565' 00:22:33.884 killing process with pid 3808565 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3808565 00:22:33.884 09:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3808565 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.UmdSYQy2jI 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3808864 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
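The -32603 "Internal error" above is the matching target-side assertion (tls.sh@177): with the key still at 0666, nvmf_subsystem_add_host cannot retrieve the PSK either. Sketched the same way, followed by the tls.sh@181 restore that lets the next pass succeed:

# Target-side negative test sketched from the trace above (key still 0666 here):
if $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
     --psk /tmp/tmp.UmdSYQy2jI; then
  echo "add_host unexpectedly accepted a world-readable PSK" >&2
  exit 1
fi
chmod 0600 /tmp/tmp.UmdSYQy2jI   # tls.sh@181: back to owner-only before the next setup
# expected JSON-RPC response, as captured above:
#   {"code": -32603, "message": "Internal error"}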
waitforlisten 3808864 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3808864 ']' 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.142 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.142 [2024-07-24 09:08:12.182288] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:34.142 [2024-07-24 09:08:12.182387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.142 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.142 [2024-07-24 09:08:12.219547] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:34.142 [2024-07-24 09:08:12.253041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.401 [2024-07-24 09:08:12.340867] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.402 [2024-07-24 09:08:12.340929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.402 [2024-07-24 09:08:12.340945] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.402 [2024-07-24 09:08:12.340959] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.402 [2024-07-24 09:08:12.340971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
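nvmfappstart, traced again here, launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. The polling loop below is an approximation of that wait (the real helper lives in autotest_common.sh); rpc_get_methods is used here as the readiness probe:

# Sketch of the nvmfappstart/waitforlisten pattern traced above:
ip netns exec cvl_0_0_ns_spdk \
  $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
max_retries=100
for ((i = 0; i < max_retries; i++)); do
  if $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
    break   # RPC socket is up; the target is ready for configuration
  fi
  sleep 0.1
done
(( i < max_retries )) || { echo "timed out waiting for pid $nvmfpid" >&2; exit 1; }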
00:22:34.402 [2024-07-24 09:08:12.341000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UmdSYQy2jI 00:22:34.402 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:34.660 [2024-07-24 09:08:12.760652] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:34.919 09:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.178 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.178 [2024-07-24 09:08:13.286095] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.178 [2024-07-24 09:08:13.286322] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.435 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:35.435 malloc0 00:22:35.694 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:35.694 09:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:35.952 [2024-07-24 09:08:14.011358] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3809051 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3809051 /var/tmp/bdevperf.sock 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 3809051 ']' 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.952 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.211 [2024-07-24 09:08:14.076671] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:36.211 [2024-07-24 09:08:14.076751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809051 ] 00:22:36.211 EAL: No free 2048 kB hugepages reported on node 1 00:22:36.211 [2024-07-24 09:08:14.109946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:36.211 [2024-07-24 09:08:14.137686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.211 [2024-07-24 09:08:14.220450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.211 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.211 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:36.211 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:36.469 [2024-07-24 09:08:14.551728] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:36.469 [2024-07-24 09:08:14.551875] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:36.727 TLSTESTn1 00:22:36.727 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:36.986 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:36.986 "subsystems": [ 00:22:36.986 { 00:22:36.986 "subsystem": "keyring", 00:22:36.986 "config": [] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "iobuf", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "iobuf_set_options", 00:22:36.986 "params": { 00:22:36.986 "small_pool_count": 8192, 00:22:36.986 "large_pool_count": 1024, 00:22:36.986 "small_bufsize": 8192, 00:22:36.986 "large_bufsize": 135168 00:22:36.986 } 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "sock", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "sock_set_default_impl", 00:22:36.986 "params": { 00:22:36.986 "impl_name": "posix" 00:22:36.986 } 00:22:36.986 }, 
00:22:36.986 { 00:22:36.986 "method": "sock_impl_set_options", 00:22:36.986 "params": { 00:22:36.986 "impl_name": "ssl", 00:22:36.986 "recv_buf_size": 4096, 00:22:36.986 "send_buf_size": 4096, 00:22:36.986 "enable_recv_pipe": true, 00:22:36.986 "enable_quickack": false, 00:22:36.986 "enable_placement_id": 0, 00:22:36.986 "enable_zerocopy_send_server": true, 00:22:36.986 "enable_zerocopy_send_client": false, 00:22:36.986 "zerocopy_threshold": 0, 00:22:36.986 "tls_version": 0, 00:22:36.986 "enable_ktls": false 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "sock_impl_set_options", 00:22:36.986 "params": { 00:22:36.986 "impl_name": "posix", 00:22:36.986 "recv_buf_size": 2097152, 00:22:36.986 "send_buf_size": 2097152, 00:22:36.986 "enable_recv_pipe": true, 00:22:36.986 "enable_quickack": false, 00:22:36.986 "enable_placement_id": 0, 00:22:36.986 "enable_zerocopy_send_server": true, 00:22:36.986 "enable_zerocopy_send_client": false, 00:22:36.986 "zerocopy_threshold": 0, 00:22:36.986 "tls_version": 0, 00:22:36.986 "enable_ktls": false 00:22:36.986 } 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "vmd", 00:22:36.986 "config": [] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "accel", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "accel_set_options", 00:22:36.986 "params": { 00:22:36.986 "small_cache_size": 128, 00:22:36.986 "large_cache_size": 16, 00:22:36.986 "task_count": 2048, 00:22:36.986 "sequence_count": 2048, 00:22:36.986 "buf_count": 2048 00:22:36.986 } 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "bdev", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "bdev_set_options", 00:22:36.986 "params": { 00:22:36.986 "bdev_io_pool_size": 65535, 00:22:36.986 "bdev_io_cache_size": 256, 00:22:36.986 "bdev_auto_examine": true, 00:22:36.986 "iobuf_small_cache_size": 128, 00:22:36.986 "iobuf_large_cache_size": 16 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_raid_set_options", 00:22:36.986 "params": { 00:22:36.986 "process_window_size_kb": 1024, 00:22:36.986 "process_max_bandwidth_mb_sec": 0 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_iscsi_set_options", 00:22:36.986 "params": { 00:22:36.986 "timeout_sec": 30 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_nvme_set_options", 00:22:36.986 "params": { 00:22:36.986 "action_on_timeout": "none", 00:22:36.986 "timeout_us": 0, 00:22:36.986 "timeout_admin_us": 0, 00:22:36.986 "keep_alive_timeout_ms": 10000, 00:22:36.986 "arbitration_burst": 0, 00:22:36.986 "low_priority_weight": 0, 00:22:36.986 "medium_priority_weight": 0, 00:22:36.986 "high_priority_weight": 0, 00:22:36.986 "nvme_adminq_poll_period_us": 10000, 00:22:36.986 "nvme_ioq_poll_period_us": 0, 00:22:36.986 "io_queue_requests": 0, 00:22:36.986 "delay_cmd_submit": true, 00:22:36.986 "transport_retry_count": 4, 00:22:36.986 "bdev_retry_count": 3, 00:22:36.986 "transport_ack_timeout": 0, 00:22:36.986 "ctrlr_loss_timeout_sec": 0, 00:22:36.986 "reconnect_delay_sec": 0, 00:22:36.986 "fast_io_fail_timeout_sec": 0, 00:22:36.986 "disable_auto_failback": false, 00:22:36.986 "generate_uuids": false, 00:22:36.986 "transport_tos": 0, 00:22:36.986 "nvme_error_stat": false, 00:22:36.986 "rdma_srq_size": 0, 00:22:36.986 "io_path_stat": false, 00:22:36.986 "allow_accel_sequence": false, 00:22:36.986 "rdma_max_cq_size": 0, 00:22:36.986 "rdma_cm_event_timeout_ms": 0, 00:22:36.986 
"dhchap_digests": [ 00:22:36.986 "sha256", 00:22:36.986 "sha384", 00:22:36.986 "sha512" 00:22:36.986 ], 00:22:36.986 "dhchap_dhgroups": [ 00:22:36.986 "null", 00:22:36.986 "ffdhe2048", 00:22:36.986 "ffdhe3072", 00:22:36.986 "ffdhe4096", 00:22:36.986 "ffdhe6144", 00:22:36.986 "ffdhe8192" 00:22:36.986 ] 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_nvme_set_hotplug", 00:22:36.986 "params": { 00:22:36.986 "period_us": 100000, 00:22:36.986 "enable": false 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_malloc_create", 00:22:36.986 "params": { 00:22:36.986 "name": "malloc0", 00:22:36.986 "num_blocks": 8192, 00:22:36.986 "block_size": 4096, 00:22:36.986 "physical_block_size": 4096, 00:22:36.986 "uuid": "39ae5782-2d61-48cd-b110-28d4cbc09c6b", 00:22:36.986 "optimal_io_boundary": 0, 00:22:36.986 "md_size": 0, 00:22:36.986 "dif_type": 0, 00:22:36.986 "dif_is_head_of_md": false, 00:22:36.986 "dif_pi_format": 0 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "bdev_wait_for_examine" 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "nbd", 00:22:36.986 "config": [] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "scheduler", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "framework_set_scheduler", 00:22:36.986 "params": { 00:22:36.986 "name": "static" 00:22:36.986 } 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "subsystem": "nvmf", 00:22:36.986 "config": [ 00:22:36.986 { 00:22:36.986 "method": "nvmf_set_config", 00:22:36.986 "params": { 00:22:36.986 "discovery_filter": "match_any", 00:22:36.986 "admin_cmd_passthru": { 00:22:36.986 "identify_ctrlr": false 00:22:36.986 } 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_set_max_subsystems", 00:22:36.986 "params": { 00:22:36.986 "max_subsystems": 1024 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_set_crdt", 00:22:36.986 "params": { 00:22:36.986 "crdt1": 0, 00:22:36.986 "crdt2": 0, 00:22:36.986 "crdt3": 0 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_create_transport", 00:22:36.986 "params": { 00:22:36.986 "trtype": "TCP", 00:22:36.986 "max_queue_depth": 128, 00:22:36.986 "max_io_qpairs_per_ctrlr": 127, 00:22:36.986 "in_capsule_data_size": 4096, 00:22:36.986 "max_io_size": 131072, 00:22:36.986 "io_unit_size": 131072, 00:22:36.986 "max_aq_depth": 128, 00:22:36.986 "num_shared_buffers": 511, 00:22:36.986 "buf_cache_size": 4294967295, 00:22:36.986 "dif_insert_or_strip": false, 00:22:36.986 "zcopy": false, 00:22:36.986 "c2h_success": false, 00:22:36.986 "sock_priority": 0, 00:22:36.986 "abort_timeout_sec": 1, 00:22:36.986 "ack_timeout": 0, 00:22:36.986 "data_wr_pool_size": 0 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_create_subsystem", 00:22:36.986 "params": { 00:22:36.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.986 "allow_any_host": false, 00:22:36.986 "serial_number": "SPDK00000000000001", 00:22:36.986 "model_number": "SPDK bdev Controller", 00:22:36.986 "max_namespaces": 10, 00:22:36.986 "min_cntlid": 1, 00:22:36.986 "max_cntlid": 65519, 00:22:36.986 "ana_reporting": false 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_subsystem_add_host", 00:22:36.986 "params": { 00:22:36.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.986 "host": "nqn.2016-06.io.spdk:host1", 00:22:36.986 "psk": "/tmp/tmp.UmdSYQy2jI" 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 
00:22:36.986 "method": "nvmf_subsystem_add_ns", 00:22:36.986 "params": { 00:22:36.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.986 "namespace": { 00:22:36.986 "nsid": 1, 00:22:36.986 "bdev_name": "malloc0", 00:22:36.986 "nguid": "39AE57822D6148CDB11028D4CBC09C6B", 00:22:36.986 "uuid": "39ae5782-2d61-48cd-b110-28d4cbc09c6b", 00:22:36.986 "no_auto_visible": false 00:22:36.986 } 00:22:36.986 } 00:22:36.986 }, 00:22:36.986 { 00:22:36.986 "method": "nvmf_subsystem_add_listener", 00:22:36.986 "params": { 00:22:36.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:36.986 "listen_address": { 00:22:36.986 "trtype": "TCP", 00:22:36.986 "adrfam": "IPv4", 00:22:36.986 "traddr": "10.0.0.2", 00:22:36.986 "trsvcid": "4420" 00:22:36.986 }, 00:22:36.986 "secure_channel": true 00:22:36.986 } 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 } 00:22:36.986 ] 00:22:36.986 }' 00:22:36.986 09:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:37.245 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:37.245 "subsystems": [ 00:22:37.245 { 00:22:37.245 "subsystem": "keyring", 00:22:37.245 "config": [] 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "subsystem": "iobuf", 00:22:37.245 "config": [ 00:22:37.245 { 00:22:37.245 "method": "iobuf_set_options", 00:22:37.245 "params": { 00:22:37.245 "small_pool_count": 8192, 00:22:37.245 "large_pool_count": 1024, 00:22:37.245 "small_bufsize": 8192, 00:22:37.245 "large_bufsize": 135168 00:22:37.245 } 00:22:37.245 } 00:22:37.245 ] 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "subsystem": "sock", 00:22:37.245 "config": [ 00:22:37.245 { 00:22:37.245 "method": "sock_set_default_impl", 00:22:37.245 "params": { 00:22:37.245 "impl_name": "posix" 00:22:37.245 } 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "method": "sock_impl_set_options", 00:22:37.245 "params": { 00:22:37.245 "impl_name": "ssl", 00:22:37.245 "recv_buf_size": 4096, 00:22:37.245 "send_buf_size": 4096, 00:22:37.245 "enable_recv_pipe": true, 00:22:37.245 "enable_quickack": false, 00:22:37.245 "enable_placement_id": 0, 00:22:37.245 "enable_zerocopy_send_server": true, 00:22:37.245 "enable_zerocopy_send_client": false, 00:22:37.245 "zerocopy_threshold": 0, 00:22:37.245 "tls_version": 0, 00:22:37.245 "enable_ktls": false 00:22:37.245 } 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "method": "sock_impl_set_options", 00:22:37.245 "params": { 00:22:37.245 "impl_name": "posix", 00:22:37.245 "recv_buf_size": 2097152, 00:22:37.245 "send_buf_size": 2097152, 00:22:37.245 "enable_recv_pipe": true, 00:22:37.245 "enable_quickack": false, 00:22:37.245 "enable_placement_id": 0, 00:22:37.245 "enable_zerocopy_send_server": true, 00:22:37.245 "enable_zerocopy_send_client": false, 00:22:37.245 "zerocopy_threshold": 0, 00:22:37.245 "tls_version": 0, 00:22:37.245 "enable_ktls": false 00:22:37.245 } 00:22:37.245 } 00:22:37.245 ] 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "subsystem": "vmd", 00:22:37.245 "config": [] 00:22:37.245 }, 00:22:37.245 { 00:22:37.245 "subsystem": "accel", 00:22:37.245 "config": [ 00:22:37.245 { 00:22:37.245 "method": "accel_set_options", 00:22:37.245 "params": { 00:22:37.245 "small_cache_size": 128, 00:22:37.245 "large_cache_size": 16, 00:22:37.245 "task_count": 2048, 00:22:37.245 "sequence_count": 2048, 00:22:37.245 "buf_count": 2048 00:22:37.245 } 00:22:37.245 } 00:22:37.245 ] 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "subsystem": "bdev", 00:22:37.246 
"config": [ 00:22:37.246 { 00:22:37.246 "method": "bdev_set_options", 00:22:37.246 "params": { 00:22:37.246 "bdev_io_pool_size": 65535, 00:22:37.246 "bdev_io_cache_size": 256, 00:22:37.246 "bdev_auto_examine": true, 00:22:37.246 "iobuf_small_cache_size": 128, 00:22:37.246 "iobuf_large_cache_size": 16 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_raid_set_options", 00:22:37.246 "params": { 00:22:37.246 "process_window_size_kb": 1024, 00:22:37.246 "process_max_bandwidth_mb_sec": 0 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_iscsi_set_options", 00:22:37.246 "params": { 00:22:37.246 "timeout_sec": 30 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_nvme_set_options", 00:22:37.246 "params": { 00:22:37.246 "action_on_timeout": "none", 00:22:37.246 "timeout_us": 0, 00:22:37.246 "timeout_admin_us": 0, 00:22:37.246 "keep_alive_timeout_ms": 10000, 00:22:37.246 "arbitration_burst": 0, 00:22:37.246 "low_priority_weight": 0, 00:22:37.246 "medium_priority_weight": 0, 00:22:37.246 "high_priority_weight": 0, 00:22:37.246 "nvme_adminq_poll_period_us": 10000, 00:22:37.246 "nvme_ioq_poll_period_us": 0, 00:22:37.246 "io_queue_requests": 512, 00:22:37.246 "delay_cmd_submit": true, 00:22:37.246 "transport_retry_count": 4, 00:22:37.246 "bdev_retry_count": 3, 00:22:37.246 "transport_ack_timeout": 0, 00:22:37.246 "ctrlr_loss_timeout_sec": 0, 00:22:37.246 "reconnect_delay_sec": 0, 00:22:37.246 "fast_io_fail_timeout_sec": 0, 00:22:37.246 "disable_auto_failback": false, 00:22:37.246 "generate_uuids": false, 00:22:37.246 "transport_tos": 0, 00:22:37.246 "nvme_error_stat": false, 00:22:37.246 "rdma_srq_size": 0, 00:22:37.246 "io_path_stat": false, 00:22:37.246 "allow_accel_sequence": false, 00:22:37.246 "rdma_max_cq_size": 0, 00:22:37.246 "rdma_cm_event_timeout_ms": 0, 00:22:37.246 "dhchap_digests": [ 00:22:37.246 "sha256", 00:22:37.246 "sha384", 00:22:37.246 "sha512" 00:22:37.246 ], 00:22:37.246 "dhchap_dhgroups": [ 00:22:37.246 "null", 00:22:37.246 "ffdhe2048", 00:22:37.246 "ffdhe3072", 00:22:37.246 "ffdhe4096", 00:22:37.246 "ffdhe6144", 00:22:37.246 "ffdhe8192" 00:22:37.246 ] 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_nvme_attach_controller", 00:22:37.246 "params": { 00:22:37.246 "name": "TLSTEST", 00:22:37.246 "trtype": "TCP", 00:22:37.246 "adrfam": "IPv4", 00:22:37.246 "traddr": "10.0.0.2", 00:22:37.246 "trsvcid": "4420", 00:22:37.246 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.246 "prchk_reftag": false, 00:22:37.246 "prchk_guard": false, 00:22:37.246 "ctrlr_loss_timeout_sec": 0, 00:22:37.246 "reconnect_delay_sec": 0, 00:22:37.246 "fast_io_fail_timeout_sec": 0, 00:22:37.246 "psk": "/tmp/tmp.UmdSYQy2jI", 00:22:37.246 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.246 "hdgst": false, 00:22:37.246 "ddgst": false 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_nvme_set_hotplug", 00:22:37.246 "params": { 00:22:37.246 "period_us": 100000, 00:22:37.246 "enable": false 00:22:37.246 } 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "method": "bdev_wait_for_examine" 00:22:37.246 } 00:22:37.246 ] 00:22:37.246 }, 00:22:37.246 { 00:22:37.246 "subsystem": "nbd", 00:22:37.246 "config": [] 00:22:37.246 } 00:22:37.246 ] 00:22:37.246 }' 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3809051 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3809051 ']' 00:22:37.246 09:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3809051 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809051 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809051' 00:22:37.246 killing process with pid 3809051 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3809051 00:22:37.246 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.246 00:22:37.246 Latency(us) 00:22:37.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.246 =================================================================================================================== 00:22:37.246 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.246 [2024-07-24 09:08:15.300828] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.246 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3809051 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3808864 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3808864 ']' 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3808864 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3808864 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3808864' 00:22:37.505 killing process with pid 3808864 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3808864 00:22:37.505 [2024-07-24 09:08:15.551207] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:37.505 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3808864 00:22:37.765 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:37.765 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:37.765 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:37.765 "subsystems": [ 00:22:37.765 { 00:22:37.765 "subsystem": 
"keyring", 00:22:37.765 "config": [] 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "subsystem": "iobuf", 00:22:37.765 "config": [ 00:22:37.765 { 00:22:37.765 "method": "iobuf_set_options", 00:22:37.765 "params": { 00:22:37.765 "small_pool_count": 8192, 00:22:37.765 "large_pool_count": 1024, 00:22:37.765 "small_bufsize": 8192, 00:22:37.765 "large_bufsize": 135168 00:22:37.765 } 00:22:37.765 } 00:22:37.765 ] 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "subsystem": "sock", 00:22:37.765 "config": [ 00:22:37.765 { 00:22:37.765 "method": "sock_set_default_impl", 00:22:37.765 "params": { 00:22:37.765 "impl_name": "posix" 00:22:37.765 } 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "method": "sock_impl_set_options", 00:22:37.765 "params": { 00:22:37.765 "impl_name": "ssl", 00:22:37.765 "recv_buf_size": 4096, 00:22:37.765 "send_buf_size": 4096, 00:22:37.765 "enable_recv_pipe": true, 00:22:37.765 "enable_quickack": false, 00:22:37.765 "enable_placement_id": 0, 00:22:37.765 "enable_zerocopy_send_server": true, 00:22:37.765 "enable_zerocopy_send_client": false, 00:22:37.765 "zerocopy_threshold": 0, 00:22:37.765 "tls_version": 0, 00:22:37.765 "enable_ktls": false 00:22:37.765 } 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "method": "sock_impl_set_options", 00:22:37.765 "params": { 00:22:37.765 "impl_name": "posix", 00:22:37.765 "recv_buf_size": 2097152, 00:22:37.765 "send_buf_size": 2097152, 00:22:37.765 "enable_recv_pipe": true, 00:22:37.765 "enable_quickack": false, 00:22:37.765 "enable_placement_id": 0, 00:22:37.765 "enable_zerocopy_send_server": true, 00:22:37.765 "enable_zerocopy_send_client": false, 00:22:37.765 "zerocopy_threshold": 0, 00:22:37.765 "tls_version": 0, 00:22:37.765 "enable_ktls": false 00:22:37.765 } 00:22:37.765 } 00:22:37.765 ] 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "subsystem": "vmd", 00:22:37.765 "config": [] 00:22:37.765 }, 00:22:37.765 { 00:22:37.765 "subsystem": "accel", 00:22:37.765 "config": [ 00:22:37.765 { 00:22:37.765 "method": "accel_set_options", 00:22:37.766 "params": { 00:22:37.766 "small_cache_size": 128, 00:22:37.766 "large_cache_size": 16, 00:22:37.766 "task_count": 2048, 00:22:37.766 "sequence_count": 2048, 00:22:37.766 "buf_count": 2048 00:22:37.766 } 00:22:37.766 } 00:22:37.766 ] 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "subsystem": "bdev", 00:22:37.766 "config": [ 00:22:37.766 { 00:22:37.766 "method": "bdev_set_options", 00:22:37.766 "params": { 00:22:37.766 "bdev_io_pool_size": 65535, 00:22:37.766 "bdev_io_cache_size": 256, 00:22:37.766 "bdev_auto_examine": true, 00:22:37.766 "iobuf_small_cache_size": 128, 00:22:37.766 "iobuf_large_cache_size": 16 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_raid_set_options", 00:22:37.766 "params": { 00:22:37.766 "process_window_size_kb": 1024, 00:22:37.766 "process_max_bandwidth_mb_sec": 0 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_iscsi_set_options", 00:22:37.766 "params": { 00:22:37.766 "timeout_sec": 30 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_nvme_set_options", 00:22:37.766 "params": { 00:22:37.766 "action_on_timeout": "none", 00:22:37.766 "timeout_us": 0, 00:22:37.766 "timeout_admin_us": 0, 00:22:37.766 "keep_alive_timeout_ms": 10000, 00:22:37.766 "arbitration_burst": 0, 00:22:37.766 "low_priority_weight": 0, 00:22:37.766 "medium_priority_weight": 0, 00:22:37.766 "high_priority_weight": 0, 00:22:37.766 "nvme_adminq_poll_period_us": 10000, 00:22:37.766 "nvme_ioq_poll_period_us": 0, 00:22:37.766 "io_queue_requests": 0, 
00:22:37.766 "delay_cmd_submit": true, 00:22:37.766 "transport_retry_count": 4, 00:22:37.766 "bdev_retry_count": 3, 00:22:37.766 "transport_ack_timeout": 0, 00:22:37.766 "ctrlr_loss_timeout_sec": 0, 00:22:37.766 "reconnect_delay_sec": 0, 00:22:37.766 "fast_io_fail_timeout_sec": 0, 00:22:37.766 "disable_auto_failback": false, 00:22:37.766 "generate_uuids": false, 00:22:37.766 "transport_tos": 0, 00:22:37.766 "nvme_error_stat": false, 00:22:37.766 "rdma_srq_size": 0, 00:22:37.766 "io_path_stat": false, 00:22:37.766 "allow_accel_sequence": false, 00:22:37.766 "rdma_max_cq_size": 0, 00:22:37.766 "rdma_cm_event_timeout_ms": 0, 00:22:37.766 "dhchap_digests": [ 00:22:37.766 "sha256", 00:22:37.766 "sha384", 00:22:37.766 "sha512" 00:22:37.766 ], 00:22:37.766 "dhchap_dhgroups": [ 00:22:37.766 "null", 00:22:37.766 "ffdhe2048", 00:22:37.766 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:37.766 "ffdhe3072", 00:22:37.766 "ffdhe4096", 00:22:37.766 "ffdhe6144", 00:22:37.766 "ffdhe8192" 00:22:37.766 ] 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_nvme_set_hotplug", 00:22:37.766 "params": { 00:22:37.766 "period_us": 100000, 00:22:37.766 "enable": false 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_malloc_create", 00:22:37.766 "params": { 00:22:37.766 "name": "malloc0", 00:22:37.766 "num_blocks": 8192, 00:22:37.766 "block_size": 4096, 00:22:37.766 "physical_block_size": 4096, 00:22:37.766 "uuid": "39ae5782-2d61-48cd-b110-28d4cbc09c6b", 00:22:37.766 "optimal_io_boundary": 0, 00:22:37.766 "md_size": 0, 00:22:37.766 "dif_type": 0, 00:22:37.766 "dif_is_head_of_md": false, 00:22:37.766 "dif_pi_format": 0 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "bdev_wait_for_examine" 00:22:37.766 } 00:22:37.766 ] 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "subsystem": "nbd", 00:22:37.766 "config": [] 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "subsystem": "scheduler", 00:22:37.766 "config": [ 00:22:37.766 { 00:22:37.766 "method": "framework_set_scheduler", 00:22:37.766 "params": { 00:22:37.766 "name": "static" 00:22:37.766 } 00:22:37.766 } 00:22:37.766 ] 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "subsystem": "nvmf", 00:22:37.766 "config": [ 00:22:37.766 { 00:22:37.766 "method": "nvmf_set_config", 00:22:37.766 "params": { 00:22:37.766 "discovery_filter": "match_any", 00:22:37.766 "admin_cmd_passthru": { 00:22:37.766 "identify_ctrlr": false 00:22:37.766 } 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "nvmf_set_max_subsystems", 00:22:37.766 "params": { 00:22:37.766 "max_subsystems": 1024 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "nvmf_set_crdt", 00:22:37.766 "params": { 00:22:37.766 "crdt1": 0, 00:22:37.766 "crdt2": 0, 00:22:37.766 "crdt3": 0 00:22:37.766 } 00:22:37.766 }, 00:22:37.766 { 00:22:37.766 "method": "nvmf_create_transport", 00:22:37.766 "params": { 00:22:37.766 "trtype": "TCP", 00:22:37.766 "max_queue_depth": 128, 00:22:37.766 "max_io_qpairs_per_ctrlr": 127, 00:22:37.766 "in_capsule_data_size": 4096, 00:22:37.767 "max_io_size": 131072, 00:22:37.767 "io_unit_size": 131072, 00:22:37.767 "max_aq_depth": 128, 00:22:37.767 "num_shared_buffers": 511, 00:22:37.767 "buf_cache_size": 4294967295, 00:22:37.767 "dif_insert_or_strip": false, 00:22:37.767 "zcopy": false, 00:22:37.767 "c2h_success": false, 00:22:37.767 "sock_priority": 0, 00:22:37.767 "abort_timeout_sec": 1, 00:22:37.767 "ack_timeout": 0, 00:22:37.767 "data_wr_pool_size": 0 
00:22:37.767 } 00:22:37.767 }, 00:22:37.767 { 00:22:37.767 "method": "nvmf_create_subsystem", 00:22:37.767 "params": { 00:22:37.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.767 "allow_any_host": false, 00:22:37.767 "serial_number": "SPDK00000000000001", 00:22:37.767 "model_number": "SPDK bdev Controller", 00:22:37.767 "max_namespaces": 10, 00:22:37.767 "min_cntlid": 1, 00:22:37.767 "max_cntlid": 65519, 00:22:37.767 "ana_reporting": false 00:22:37.767 } 00:22:37.767 }, 00:22:37.767 { 00:22:37.767 "method": "nvmf_subsystem_add_host", 00:22:37.767 "params": { 00:22:37.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.767 "host": "nqn.2016-06.io.spdk:host1", 00:22:37.767 "psk": "/tmp/tmp.UmdSYQy2jI" 00:22:37.767 } 00:22:37.767 }, 00:22:37.767 { 00:22:37.767 "method": "nvmf_subsystem_add_ns", 00:22:37.767 "params": { 00:22:37.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.767 "namespace": { 00:22:37.767 "nsid": 1, 00:22:37.767 "bdev_name": "malloc0", 00:22:37.767 "nguid": "39AE57822D6148CDB11028D4CBC09C6B", 00:22:37.767 "uuid": "39ae5782-2d61-48cd-b110-28d4cbc09c6b", 00:22:37.767 "no_auto_visible": false 00:22:37.767 } 00:22:37.767 } 00:22:37.767 }, 00:22:37.767 { 00:22:37.767 "method": "nvmf_subsystem_add_listener", 00:22:37.767 "params": { 00:22:37.767 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.767 "listen_address": { 00:22:37.767 "trtype": "TCP", 00:22:37.767 "adrfam": "IPv4", 00:22:37.767 "traddr": "10.0.0.2", 00:22:37.767 "trsvcid": "4420" 00:22:37.767 }, 00:22:37.767 "secure_channel": true 00:22:37.767 } 00:22:37.767 } 00:22:37.767 ] 00:22:37.767 } 00:22:37.767 ] 00:22:37.767 }' 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3809298 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3809298 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3809298 ']' 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:37.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.767 09:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.767 [2024-07-24 09:08:15.860715] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
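The target above is started with -c /dev/fd/62 while tls.sh echoes the JSON shown into that descriptor, so the whole subsystem tree is applied at boot instead of via individual RPCs. A minimal sketch of the same pattern, using bash process substitution and an illustrative empty config body rather than the full one from the log:

    # Feed a JSON config to nvmf_tgt over a pipe instead of a file on disk;
    # the config body here is a placeholder, not the one echoed above.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo '{
        "subsystems": []
    }')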
00:22:37.767 [2024-07-24 09:08:15.860817] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.027 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.027 [2024-07-24 09:08:15.897720] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:38.027 [2024-07-24 09:08:15.929242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.027 [2024-07-24 09:08:16.017426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.027 [2024-07-24 09:08:16.017502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.027 [2024-07-24 09:08:16.017519] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.027 [2024-07-24 09:08:16.017533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.027 [2024-07-24 09:08:16.017545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:38.027 [2024-07-24 09:08:16.017636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.285 [2024-07-24 09:08:16.253170] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.285 [2024-07-24 09:08:16.282850] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:38.285 [2024-07-24 09:08:16.298918] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.285 [2024-07-24 09:08:16.299160] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.852 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.852 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:38.852 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.852 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3809451 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3809451 /var/tmp/bdevperf.sock 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3809451 ']' 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.853 09:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:38.853 "subsystems": [ 00:22:38.853 { 00:22:38.853 "subsystem": "keyring", 00:22:38.853 "config": [] 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "subsystem": "iobuf", 00:22:38.853 "config": [ 00:22:38.853 { 00:22:38.853 "method": "iobuf_set_options", 00:22:38.853 "params": { 00:22:38.853 "small_pool_count": 8192, 00:22:38.853 "large_pool_count": 1024, 00:22:38.853 "small_bufsize": 8192, 00:22:38.853 "large_bufsize": 135168 00:22:38.853 } 00:22:38.853 } 00:22:38.853 ] 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "subsystem": "sock", 00:22:38.853 "config": [ 00:22:38.853 { 00:22:38.853 "method": "sock_set_default_impl", 00:22:38.853 "params": { 00:22:38.853 "impl_name": "posix" 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "sock_impl_set_options", 00:22:38.853 "params": { 00:22:38.853 "impl_name": "ssl", 00:22:38.853 "recv_buf_size": 4096, 00:22:38.853 "send_buf_size": 4096, 00:22:38.853 "enable_recv_pipe": true, 00:22:38.853 "enable_quickack": false, 00:22:38.853 "enable_placement_id": 0, 00:22:38.853 "enable_zerocopy_send_server": true, 00:22:38.853 "enable_zerocopy_send_client": false, 00:22:38.853 "zerocopy_threshold": 0, 00:22:38.853 "tls_version": 0, 00:22:38.853 "enable_ktls": false 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "sock_impl_set_options", 00:22:38.853 "params": { 00:22:38.853 "impl_name": "posix", 00:22:38.853 "recv_buf_size": 2097152, 00:22:38.853 "send_buf_size": 2097152, 00:22:38.853 "enable_recv_pipe": true, 00:22:38.853 "enable_quickack": false, 00:22:38.853 "enable_placement_id": 0, 00:22:38.853 "enable_zerocopy_send_server": true, 00:22:38.853 "enable_zerocopy_send_client": false, 00:22:38.853 "zerocopy_threshold": 0, 00:22:38.853 "tls_version": 0, 00:22:38.853 "enable_ktls": false 00:22:38.853 } 00:22:38.853 } 00:22:38.853 ] 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "subsystem": "vmd", 00:22:38.853 "config": [] 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "subsystem": "accel", 00:22:38.853 "config": [ 00:22:38.853 { 00:22:38.853 "method": "accel_set_options", 00:22:38.853 "params": { 00:22:38.853 "small_cache_size": 128, 00:22:38.853 "large_cache_size": 16, 00:22:38.853 "task_count": 2048, 00:22:38.853 "sequence_count": 2048, 00:22:38.853 "buf_count": 2048 00:22:38.853 } 00:22:38.853 } 00:22:38.853 ] 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "subsystem": "bdev", 00:22:38.853 "config": [ 00:22:38.853 { 00:22:38.853 "method": "bdev_set_options", 00:22:38.853 "params": { 00:22:38.853 "bdev_io_pool_size": 65535, 00:22:38.853 "bdev_io_cache_size": 256, 00:22:38.853 "bdev_auto_examine": true, 00:22:38.853 "iobuf_small_cache_size": 128, 00:22:38.853 "iobuf_large_cache_size": 16 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "bdev_raid_set_options", 00:22:38.853 "params": { 00:22:38.853 "process_window_size_kb": 1024, 00:22:38.853 "process_max_bandwidth_mb_sec": 0 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "bdev_iscsi_set_options", 
00:22:38.853 "params": { 00:22:38.853 "timeout_sec": 30 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "bdev_nvme_set_options", 00:22:38.853 "params": { 00:22:38.853 "action_on_timeout": "none", 00:22:38.853 "timeout_us": 0, 00:22:38.853 "timeout_admin_us": 0, 00:22:38.853 "keep_alive_timeout_ms": 10000, 00:22:38.853 "arbitration_burst": 0, 00:22:38.853 "low_priority_weight": 0, 00:22:38.853 "medium_priority_weight": 0, 00:22:38.853 "high_priority_weight": 0, 00:22:38.853 "nvme_adminq_poll_period_us": 10000, 00:22:38.853 "nvme_ioq_poll_period_us": 0, 00:22:38.853 "io_queue_requests": 512, 00:22:38.853 "delay_cmd_submit": true, 00:22:38.853 "transport_retry_count": 4, 00:22:38.853 "bdev_retry_count": 3, 00:22:38.853 "transport_ack_timeout": 0, 00:22:38.853 "ctrlr_loss_timeout_sec": 0, 00:22:38.853 "reconnect_delay_sec": 0, 00:22:38.853 "fast_io_fail_timeout_sec": 0, 00:22:38.853 "disable_auto_failback": false, 00:22:38.853 "generate_uuids": false, 00:22:38.853 "transport_tos": 0, 00:22:38.853 "nvme_error_stat": false, 00:22:38.853 "rdma_srq_size": 0, 00:22:38.853 "io_path_stat": false, 00:22:38.853 "allow_accel_sequence": false, 00:22:38.853 "rdma_max_cq_size": 0, 00:22:38.853 "rdma_cm_event_timeout_ms": 0, 00:22:38.853 "dhchap_digests": [ 00:22:38.853 "sha256", 00:22:38.853 "sha384", 00:22:38.853 "sha512" 00:22:38.853 ], 00:22:38.853 "dhchap_dhgroups": [ 00:22:38.853 "null", 00:22:38.853 "ffdhe2048", 00:22:38.853 "ffdhe3072", 00:22:38.853 "ffdhe4096", 00:22:38.853 "ffdhe6144", 00:22:38.853 "ffdhe8192" 00:22:38.853 ] 00:22:38.853 } 00:22:38.853 }, 00:22:38.853 { 00:22:38.853 "method": "bdev_nvme_attach_controller", 00:22:38.853 "params": { 00:22:38.853 "name": "TLSTEST", 00:22:38.853 "trtype": "TCP", 00:22:38.853 "adrfam": "IPv4", 00:22:38.853 "traddr": "10.0.0.2", 00:22:38.853 "trsvcid": "4420", 00:22:38.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.854 "prchk_reftag": false, 00:22:38.854 "prchk_guard": false, 00:22:38.854 "ctrlr_loss_timeout_sec": 0, 00:22:38.854 "reconnect_delay_sec": 0, 00:22:38.854 "fast_io_fail_timeout_sec": 0, 00:22:38.854 "psk": "/tmp/tmp.UmdSYQy2jI", 00:22:38.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.854 "hdgst": false, 00:22:38.854 "ddgst": false 00:22:38.854 } 00:22:38.854 }, 00:22:38.854 { 00:22:38.854 "method": "bdev_nvme_set_hotplug", 00:22:38.854 "params": { 00:22:38.854 "period_us": 100000, 00:22:38.854 "enable": false 00:22:38.854 } 00:22:38.854 }, 00:22:38.854 { 00:22:38.854 "method": "bdev_wait_for_examine" 00:22:38.854 } 00:22:38.854 ] 00:22:38.854 }, 00:22:38.854 { 00:22:38.854 "subsystem": "nbd", 00:22:38.854 "config": [] 00:22:38.854 } 00:22:38.854 ] 00:22:38.854 }' 00:22:38.854 [2024-07-24 09:08:16.848296] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:38.854 [2024-07-24 09:08:16.848375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3809451 ] 00:22:38.854 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.854 [2024-07-24 09:08:16.879708] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
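This bdevperf instance gets its controller from the JSON above: the bdev_nvme_attach_controller entry carries "psk": "/tmp/tmp.UmdSYQy2jI" as a file path, which is the deprecated form flagged by the nvme_ctrlr_psk warnings in this log. Expressed as a one-off RPC the same attach would look roughly as follows (flags taken from the params above; the long Jenkins workspace path is shortened to scripts/rpc.py):

    # Deprecated path-based PSK attach; compare the keyring-based
    # form with --psk key0 used later in this log.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.UmdSYQy2jI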
00:22:38.854 [2024-07-24 09:08:16.907067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.112 [2024-07-24 09:08:16.993043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.112 [2024-07-24 09:08:17.160914] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:39.112 [2024-07-24 09:08:17.161082] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:40.128 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.128 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.128 09:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:40.128 Running I/O for 10 seconds... 00:22:50.094 00:22:50.094 Latency(us) 00:22:50.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.094 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:50.094 Verification LBA range: start 0x0 length 0x2000 00:22:50.094 TLSTESTn1 : 10.03 3629.60 14.18 0.00 0.00 35189.59 11359.57 62137.84 00:22:50.094 =================================================================================================================== 00:22:50.094 Total : 3629.60 14.18 0.00 0.00 35189.59 11359.57 62137.84 00:22:50.094 0 00:22:50.094 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:50.094 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3809451 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3809451 ']' 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3809451 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809451 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809451' 00:22:50.095 killing process with pid 3809451 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3809451 00:22:50.095 Received shutdown signal, test time was about 10.000000 seconds 00:22:50.095 00:22:50.095 Latency(us) 00:22:50.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:50.095 =================================================================================================================== 00:22:50.095 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:50.095 [2024-07-24 09:08:28.071945] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:50.095 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@972 -- # wait 3809451 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3809298 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3809298 ']' 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3809298 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3809298 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3809298' 00:22:50.353 killing process with pid 3809298 00:22:50.353 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3809298 00:22:50.354 [2024-07-24 09:08:28.328975] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:50.354 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3809298 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3810781 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3810781 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3810781 ']' 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:50.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:50.612 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.612 [2024-07-24 09:08:28.638440] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
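waitforlisten above blocks until the freshly started target answers on /var/tmp/spdk.sock; the 'Waiting for process to start up and listen...' line is its progress message. A simplified sketch of the idea (the real helper in common/autotest_common.sh also rechecks the pid and caps the retries; rpc_get_methods is just a cheap RPC to probe with):

    # Poll the RPC socket until the target is up; simplified, assumes
    # scripts/rpc.py and the default /var/tmp/spdk.sock seen above.
    while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods \
            >/dev/null 2>&1; do
        sleep 0.1
    done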
00:22:50.612 [2024-07-24 09:08:28.638552] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:50.612 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.612 [2024-07-24 09:08:28.675471] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:50.612 [2024-07-24 09:08:28.707443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:50.876 [2024-07-24 09:08:28.795616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:50.877 [2024-07-24 09:08:28.795680] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:50.877 [2024-07-24 09:08:28.795707] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:50.877 [2024-07-24 09:08:28.795721] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:50.877 [2024-07-24 09:08:28.795732] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:50.877 [2024-07-24 09:08:28.795772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.UmdSYQy2jI 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.UmdSYQy2jI 00:22:50.877 09:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:51.144 [2024-07-24 09:08:29.171731] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:51.145 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:51.403 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:51.660 [2024-07-24 09:08:29.669065] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:51.660 [2024-07-24 09:08:29.669305] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:51.660 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:51.918 malloc0 
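bdev_malloc_create 32 4096 -b malloc0 above sizes the ramdisk as 32 MiB with a 4096-byte block size, which matches the "num_blocks": 8192 reported for malloc0 in the JSON configs elsewhere in this log:

    # 32 MiB at 4096 B per block:
    echo $(( 32 * 1024 * 1024 / 4096 ))   # 8192 blocks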
00:22:51.918 09:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:52.176 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI 00:22:52.434 [2024-07-24 09:08:30.432022] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3811063 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3811063 /var/tmp/bdevperf.sock 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811063 ']' 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.434 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:52.434 [2024-07-24 09:08:30.488947] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:52.434 [2024-07-24 09:08:30.489015] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811063 ] 00:22:52.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:52.434 [2024-07-24 09:08:30.520671] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
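Stripped of the xtrace prefixes, the setup_nvmf_tgt sequence just completed (tls.sh lines 51-58) is the following six RPCs; note the -k on the listener, which enables TLS, and the path-based --psk on add_host that triggers the 'PSK path ... removal in v24.09' warning above. Commands are as issued in the log, with the Jenkins workspace path shortened to scripts/rpc.py:

    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UmdSYQy2jI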
00:22:52.693 [2024-07-24 09:08:30.550702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.693 [2024-07-24 09:08:30.641699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.693 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:52.693 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:52.693 09:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UmdSYQy2jI 00:22:52.951 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:53.209 [2024-07-24 09:08:31.260047] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.467 nvme0n1 00:22:53.467 09:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.467 Running I/O for 1 seconds... 00:22:54.401 00:22:54.401 Latency(us) 00:22:54.401 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.401 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:54.401 Verification LBA range: start 0x0 length 0x2000 00:22:54.401 nvme0n1 : 1.03 3418.59 13.35 0.00 0.00 36985.43 10048.85 48351.00 00:22:54.401 =================================================================================================================== 00:22:54.401 Total : 3418.59 13.35 0.00 0.00 36985.43 10048.85 48351.00 00:22:54.401 0 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3811063 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811063 ']' 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3811063 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.401 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811063 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811063' 00:22:54.660 killing process with pid 3811063 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811063 00:22:54.660 Received shutdown signal, test time was about 1.000000 seconds 00:22:54.660 00:22:54.660 Latency(us) 00:22:54.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:54.660 =================================================================================================================== 00:22:54.660 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:54.660 
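The run above uses the keyring flow rather than a PSK path: the key file is first registered as key0 and bdev_nvme_attach_controller then references it by name, so, unlike the earlier path-based attach, no spdk_nvme_ctrlr_opts.psk deprecation warning is logged. The two RPCs as issued:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key \
        key0 /tmp/tmp.UmdSYQy2jI
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The MiB/s column in the result table is just IOPS times the 4096-byte IO size: 3418.59 * 4096 / 2^20 = 13.35.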
09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811063 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3810781 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3810781 ']' 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3810781 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:54.660 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3810781 00:22:54.918 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:54.918 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:54.918 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3810781' 00:22:54.918 killing process with pid 3810781 00:22:54.918 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3810781 00:22:54.918 [2024-07-24 09:08:32.796784] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:54.918 09:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3810781 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3811342 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3811342 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811342 ']' 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.177 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.177 [2024-07-24 09:08:33.097524] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
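Every teardown in this log runs the same killprocess helper: verify the pid is alive with kill -0, read the process name with ps (refusing to kill sudo), kill it, then wait to reap it and surface its exit status. A simplified sketch of that pattern (the real helper in common/autotest_common.sh adds the uname check and logging seen above):

    # Simplified killprocess; assumes $1 is a child of this shell,
    # otherwise the final wait has nothing to reap.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                    # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1               # never kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }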
00:22:55.177 [2024-07-24 09:08:33.097620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.177 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.177 [2024-07-24 09:08:33.133826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:55.177 [2024-07-24 09:08:33.160470] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.177 [2024-07-24 09:08:33.244404] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.177 [2024-07-24 09:08:33.244455] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.177 [2024-07-24 09:08:33.244478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.177 [2024-07-24 09:08:33.244489] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.177 [2024-07-24 09:08:33.244499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.177 [2024-07-24 09:08:33.244530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.434 [2024-07-24 09:08:33.387222] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.434 malloc0 00:22:55.434 [2024-07-24 09:08:33.418917] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.434 [2024-07-24 09:08:33.434263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3811438 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3811438 /var/tmp/bdevperf.sock 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811438 ']' 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:55.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:55.434 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.434 [2024-07-24 09:08:33.505001] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:55.434 [2024-07-24 09:08:33.505077] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811438 ] 00:22:55.434 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.434 [2024-07-24 09:08:33.537972] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:55.690 [2024-07-24 09:08:33.566920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.690 [2024-07-24 09:08:33.655819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.690 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.690 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:55.690 09:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UmdSYQy2jI 00:22:55.947 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:56.511 [2024-07-24 09:08:34.331185] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:56.511 nvme0n1 00:22:56.511 09:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:56.511 Running I/O for 1 seconds... 
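bdevperf was launched with -z above, so after init it idles instead of starting the workload; the perform_tests call over /var/tmp/bdevperf.sock is what kicks off the run whose results follow. -q 128 and -o 4k set the queue depth and IO size, and -t 1 bounds the verify run to one second:

    # -z at launch means the job is triggered out-of-band:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests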
00:22:57.884 00:22:57.884 Latency(us) 00:22:57.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:57.884 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:57.884 Verification LBA range: start 0x0 length 0x2000 00:22:57.884 nvme0n1 : 1.03 3382.45 13.21 0.00 0.00 37407.35 6092.42 70681.79 00:22:57.884 =================================================================================================================== 00:22:57.884 Total : 3382.45 13.21 0.00 0.00 37407.35 6092.42 70681.79 00:22:57.884 0 00:22:57.884 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:22:57.884 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:57.884 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:57.884 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:57.884 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:22:57.884 "subsystems": [ 00:22:57.884 { 00:22:57.884 "subsystem": "keyring", 00:22:57.884 "config": [ 00:22:57.884 { 00:22:57.884 "method": "keyring_file_add_key", 00:22:57.885 "params": { 00:22:57.885 "name": "key0", 00:22:57.885 "path": "/tmp/tmp.UmdSYQy2jI" 00:22:57.885 } 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "iobuf", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 "method": "iobuf_set_options", 00:22:57.885 "params": { 00:22:57.885 "small_pool_count": 8192, 00:22:57.885 "large_pool_count": 1024, 00:22:57.885 "small_bufsize": 8192, 00:22:57.885 "large_bufsize": 135168 00:22:57.885 } 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "sock", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 "method": "sock_set_default_impl", 00:22:57.885 "params": { 00:22:57.885 "impl_name": "posix" 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "sock_impl_set_options", 00:22:57.885 "params": { 00:22:57.885 "impl_name": "ssl", 00:22:57.885 "recv_buf_size": 4096, 00:22:57.885 "send_buf_size": 4096, 00:22:57.885 "enable_recv_pipe": true, 00:22:57.885 "enable_quickack": false, 00:22:57.885 "enable_placement_id": 0, 00:22:57.885 "enable_zerocopy_send_server": true, 00:22:57.885 "enable_zerocopy_send_client": false, 00:22:57.885 "zerocopy_threshold": 0, 00:22:57.885 "tls_version": 0, 00:22:57.885 "enable_ktls": false 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "sock_impl_set_options", 00:22:57.885 "params": { 00:22:57.885 "impl_name": "posix", 00:22:57.885 "recv_buf_size": 2097152, 00:22:57.885 "send_buf_size": 2097152, 00:22:57.885 "enable_recv_pipe": true, 00:22:57.885 "enable_quickack": false, 00:22:57.885 "enable_placement_id": 0, 00:22:57.885 "enable_zerocopy_send_server": true, 00:22:57.885 "enable_zerocopy_send_client": false, 00:22:57.885 "zerocopy_threshold": 0, 00:22:57.885 "tls_version": 0, 00:22:57.885 "enable_ktls": false 00:22:57.885 } 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "vmd", 00:22:57.885 "config": [] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "accel", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 "method": "accel_set_options", 00:22:57.885 "params": { 00:22:57.885 "small_cache_size": 128, 00:22:57.885 "large_cache_size": 16, 00:22:57.885 "task_count": 2048, 00:22:57.885 "sequence_count": 2048, 00:22:57.885 "buf_count": 
2048 00:22:57.885 } 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "bdev", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 "method": "bdev_set_options", 00:22:57.885 "params": { 00:22:57.885 "bdev_io_pool_size": 65535, 00:22:57.885 "bdev_io_cache_size": 256, 00:22:57.885 "bdev_auto_examine": true, 00:22:57.885 "iobuf_small_cache_size": 128, 00:22:57.885 "iobuf_large_cache_size": 16 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_raid_set_options", 00:22:57.885 "params": { 00:22:57.885 "process_window_size_kb": 1024, 00:22:57.885 "process_max_bandwidth_mb_sec": 0 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_iscsi_set_options", 00:22:57.885 "params": { 00:22:57.885 "timeout_sec": 30 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_nvme_set_options", 00:22:57.885 "params": { 00:22:57.885 "action_on_timeout": "none", 00:22:57.885 "timeout_us": 0, 00:22:57.885 "timeout_admin_us": 0, 00:22:57.885 "keep_alive_timeout_ms": 10000, 00:22:57.885 "arbitration_burst": 0, 00:22:57.885 "low_priority_weight": 0, 00:22:57.885 "medium_priority_weight": 0, 00:22:57.885 "high_priority_weight": 0, 00:22:57.885 "nvme_adminq_poll_period_us": 10000, 00:22:57.885 "nvme_ioq_poll_period_us": 0, 00:22:57.885 "io_queue_requests": 0, 00:22:57.885 "delay_cmd_submit": true, 00:22:57.885 "transport_retry_count": 4, 00:22:57.885 "bdev_retry_count": 3, 00:22:57.885 "transport_ack_timeout": 0, 00:22:57.885 "ctrlr_loss_timeout_sec": 0, 00:22:57.885 "reconnect_delay_sec": 0, 00:22:57.885 "fast_io_fail_timeout_sec": 0, 00:22:57.885 "disable_auto_failback": false, 00:22:57.885 "generate_uuids": false, 00:22:57.885 "transport_tos": 0, 00:22:57.885 "nvme_error_stat": false, 00:22:57.885 "rdma_srq_size": 0, 00:22:57.885 "io_path_stat": false, 00:22:57.885 "allow_accel_sequence": false, 00:22:57.885 "rdma_max_cq_size": 0, 00:22:57.885 "rdma_cm_event_timeout_ms": 0, 00:22:57.885 "dhchap_digests": [ 00:22:57.885 "sha256", 00:22:57.885 "sha384", 00:22:57.885 "sha512" 00:22:57.885 ], 00:22:57.885 "dhchap_dhgroups": [ 00:22:57.885 "null", 00:22:57.885 "ffdhe2048", 00:22:57.885 "ffdhe3072", 00:22:57.885 "ffdhe4096", 00:22:57.885 "ffdhe6144", 00:22:57.885 "ffdhe8192" 00:22:57.885 ] 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_nvme_set_hotplug", 00:22:57.885 "params": { 00:22:57.885 "period_us": 100000, 00:22:57.885 "enable": false 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_malloc_create", 00:22:57.885 "params": { 00:22:57.885 "name": "malloc0", 00:22:57.885 "num_blocks": 8192, 00:22:57.885 "block_size": 4096, 00:22:57.885 "physical_block_size": 4096, 00:22:57.885 "uuid": "01fdbe6b-0d97-43f8-9a2c-bf4cbb3527e5", 00:22:57.885 "optimal_io_boundary": 0, 00:22:57.885 "md_size": 0, 00:22:57.885 "dif_type": 0, 00:22:57.885 "dif_is_head_of_md": false, 00:22:57.885 "dif_pi_format": 0 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "bdev_wait_for_examine" 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "nbd", 00:22:57.885 "config": [] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "scheduler", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 "method": "framework_set_scheduler", 00:22:57.885 "params": { 00:22:57.885 "name": "static" 00:22:57.885 } 00:22:57.885 } 00:22:57.885 ] 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "subsystem": "nvmf", 00:22:57.885 "config": [ 00:22:57.885 { 00:22:57.885 
"method": "nvmf_set_config", 00:22:57.885 "params": { 00:22:57.885 "discovery_filter": "match_any", 00:22:57.885 "admin_cmd_passthru": { 00:22:57.885 "identify_ctrlr": false 00:22:57.885 } 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_set_max_subsystems", 00:22:57.885 "params": { 00:22:57.885 "max_subsystems": 1024 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_set_crdt", 00:22:57.885 "params": { 00:22:57.885 "crdt1": 0, 00:22:57.885 "crdt2": 0, 00:22:57.885 "crdt3": 0 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_create_transport", 00:22:57.885 "params": { 00:22:57.885 "trtype": "TCP", 00:22:57.885 "max_queue_depth": 128, 00:22:57.885 "max_io_qpairs_per_ctrlr": 127, 00:22:57.885 "in_capsule_data_size": 4096, 00:22:57.885 "max_io_size": 131072, 00:22:57.885 "io_unit_size": 131072, 00:22:57.885 "max_aq_depth": 128, 00:22:57.885 "num_shared_buffers": 511, 00:22:57.885 "buf_cache_size": 4294967295, 00:22:57.885 "dif_insert_or_strip": false, 00:22:57.885 "zcopy": false, 00:22:57.885 "c2h_success": false, 00:22:57.885 "sock_priority": 0, 00:22:57.885 "abort_timeout_sec": 1, 00:22:57.885 "ack_timeout": 0, 00:22:57.885 "data_wr_pool_size": 0 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_create_subsystem", 00:22:57.885 "params": { 00:22:57.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.885 "allow_any_host": false, 00:22:57.885 "serial_number": "00000000000000000000", 00:22:57.885 "model_number": "SPDK bdev Controller", 00:22:57.885 "max_namespaces": 32, 00:22:57.885 "min_cntlid": 1, 00:22:57.885 "max_cntlid": 65519, 00:22:57.885 "ana_reporting": false 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_subsystem_add_host", 00:22:57.885 "params": { 00:22:57.885 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.885 "host": "nqn.2016-06.io.spdk:host1", 00:22:57.885 "psk": "key0" 00:22:57.885 } 00:22:57.885 }, 00:22:57.885 { 00:22:57.885 "method": "nvmf_subsystem_add_ns", 00:22:57.886 "params": { 00:22:57.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.886 "namespace": { 00:22:57.886 "nsid": 1, 00:22:57.886 "bdev_name": "malloc0", 00:22:57.886 "nguid": "01FDBE6B0D9743F89A2CBF4CBB3527E5", 00:22:57.886 "uuid": "01fdbe6b-0d97-43f8-9a2c-bf4cbb3527e5", 00:22:57.886 "no_auto_visible": false 00:22:57.886 } 00:22:57.886 } 00:22:57.886 }, 00:22:57.886 { 00:22:57.886 "method": "nvmf_subsystem_add_listener", 00:22:57.886 "params": { 00:22:57.886 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.886 "listen_address": { 00:22:57.886 "trtype": "TCP", 00:22:57.886 "adrfam": "IPv4", 00:22:57.886 "traddr": "10.0.0.2", 00:22:57.886 "trsvcid": "4420" 00:22:57.886 }, 00:22:57.886 "secure_channel": false, 00:22:57.886 "sock_impl": "ssl" 00:22:57.886 } 00:22:57.886 } 00:22:57.886 ] 00:22:57.886 } 00:22:57.886 ] 00:22:57.886 }' 00:22:57.886 09:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:58.144 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:22:58.144 "subsystems": [ 00:22:58.144 { 00:22:58.144 "subsystem": "keyring", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "keyring_file_add_key", 00:22:58.144 "params": { 00:22:58.144 "name": "key0", 00:22:58.144 "path": "/tmp/tmp.UmdSYQy2jI" 00:22:58.144 } 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "iobuf", 00:22:58.144 
"config": [ 00:22:58.144 { 00:22:58.144 "method": "iobuf_set_options", 00:22:58.144 "params": { 00:22:58.144 "small_pool_count": 8192, 00:22:58.144 "large_pool_count": 1024, 00:22:58.144 "small_bufsize": 8192, 00:22:58.144 "large_bufsize": 135168 00:22:58.144 } 00:22:58.144 } 00:22:58.144 ] 00:22:58.144 }, 00:22:58.144 { 00:22:58.144 "subsystem": "sock", 00:22:58.144 "config": [ 00:22:58.144 { 00:22:58.144 "method": "sock_set_default_impl", 00:22:58.144 "params": { 00:22:58.145 "impl_name": "posix" 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "sock_impl_set_options", 00:22:58.145 "params": { 00:22:58.145 "impl_name": "ssl", 00:22:58.145 "recv_buf_size": 4096, 00:22:58.145 "send_buf_size": 4096, 00:22:58.145 "enable_recv_pipe": true, 00:22:58.145 "enable_quickack": false, 00:22:58.145 "enable_placement_id": 0, 00:22:58.145 "enable_zerocopy_send_server": true, 00:22:58.145 "enable_zerocopy_send_client": false, 00:22:58.145 "zerocopy_threshold": 0, 00:22:58.145 "tls_version": 0, 00:22:58.145 "enable_ktls": false 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "sock_impl_set_options", 00:22:58.145 "params": { 00:22:58.145 "impl_name": "posix", 00:22:58.145 "recv_buf_size": 2097152, 00:22:58.145 "send_buf_size": 2097152, 00:22:58.145 "enable_recv_pipe": true, 00:22:58.145 "enable_quickack": false, 00:22:58.145 "enable_placement_id": 0, 00:22:58.145 "enable_zerocopy_send_server": true, 00:22:58.145 "enable_zerocopy_send_client": false, 00:22:58.145 "zerocopy_threshold": 0, 00:22:58.145 "tls_version": 0, 00:22:58.145 "enable_ktls": false 00:22:58.145 } 00:22:58.145 } 00:22:58.145 ] 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "subsystem": "vmd", 00:22:58.145 "config": [] 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "subsystem": "accel", 00:22:58.145 "config": [ 00:22:58.145 { 00:22:58.145 "method": "accel_set_options", 00:22:58.145 "params": { 00:22:58.145 "small_cache_size": 128, 00:22:58.145 "large_cache_size": 16, 00:22:58.145 "task_count": 2048, 00:22:58.145 "sequence_count": 2048, 00:22:58.145 "buf_count": 2048 00:22:58.145 } 00:22:58.145 } 00:22:58.145 ] 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "subsystem": "bdev", 00:22:58.145 "config": [ 00:22:58.145 { 00:22:58.145 "method": "bdev_set_options", 00:22:58.145 "params": { 00:22:58.145 "bdev_io_pool_size": 65535, 00:22:58.145 "bdev_io_cache_size": 256, 00:22:58.145 "bdev_auto_examine": true, 00:22:58.145 "iobuf_small_cache_size": 128, 00:22:58.145 "iobuf_large_cache_size": 16 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_raid_set_options", 00:22:58.145 "params": { 00:22:58.145 "process_window_size_kb": 1024, 00:22:58.145 "process_max_bandwidth_mb_sec": 0 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_iscsi_set_options", 00:22:58.145 "params": { 00:22:58.145 "timeout_sec": 30 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_nvme_set_options", 00:22:58.145 "params": { 00:22:58.145 "action_on_timeout": "none", 00:22:58.145 "timeout_us": 0, 00:22:58.145 "timeout_admin_us": 0, 00:22:58.145 "keep_alive_timeout_ms": 10000, 00:22:58.145 "arbitration_burst": 0, 00:22:58.145 "low_priority_weight": 0, 00:22:58.145 "medium_priority_weight": 0, 00:22:58.145 "high_priority_weight": 0, 00:22:58.145 "nvme_adminq_poll_period_us": 10000, 00:22:58.145 "nvme_ioq_poll_period_us": 0, 00:22:58.145 "io_queue_requests": 512, 00:22:58.145 "delay_cmd_submit": true, 00:22:58.145 "transport_retry_count": 4, 00:22:58.145 "bdev_retry_count": 3, 
00:22:58.145 "transport_ack_timeout": 0, 00:22:58.145 "ctrlr_loss_timeout_sec": 0, 00:22:58.145 "reconnect_delay_sec": 0, 00:22:58.145 "fast_io_fail_timeout_sec": 0, 00:22:58.145 "disable_auto_failback": false, 00:22:58.145 "generate_uuids": false, 00:22:58.145 "transport_tos": 0, 00:22:58.145 "nvme_error_stat": false, 00:22:58.145 "rdma_srq_size": 0, 00:22:58.145 "io_path_stat": false, 00:22:58.145 "allow_accel_sequence": false, 00:22:58.145 "rdma_max_cq_size": 0, 00:22:58.145 "rdma_cm_event_timeout_ms": 0, 00:22:58.145 "dhchap_digests": [ 00:22:58.145 "sha256", 00:22:58.145 "sha384", 00:22:58.145 "sha512" 00:22:58.145 ], 00:22:58.145 "dhchap_dhgroups": [ 00:22:58.145 "null", 00:22:58.145 "ffdhe2048", 00:22:58.145 "ffdhe3072", 00:22:58.145 "ffdhe4096", 00:22:58.145 "ffdhe6144", 00:22:58.145 "ffdhe8192" 00:22:58.145 ] 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_nvme_attach_controller", 00:22:58.145 "params": { 00:22:58.145 "name": "nvme0", 00:22:58.145 "trtype": "TCP", 00:22:58.145 "adrfam": "IPv4", 00:22:58.145 "traddr": "10.0.0.2", 00:22:58.145 "trsvcid": "4420", 00:22:58.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.145 "prchk_reftag": false, 00:22:58.145 "prchk_guard": false, 00:22:58.145 "ctrlr_loss_timeout_sec": 0, 00:22:58.145 "reconnect_delay_sec": 0, 00:22:58.145 "fast_io_fail_timeout_sec": 0, 00:22:58.145 "psk": "key0", 00:22:58.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:58.145 "hdgst": false, 00:22:58.145 "ddgst": false 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_nvme_set_hotplug", 00:22:58.145 "params": { 00:22:58.145 "period_us": 100000, 00:22:58.145 "enable": false 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_enable_histogram", 00:22:58.145 "params": { 00:22:58.145 "name": "nvme0n1", 00:22:58.145 "enable": true 00:22:58.145 } 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "method": "bdev_wait_for_examine" 00:22:58.145 } 00:22:58.145 ] 00:22:58.145 }, 00:22:58.145 { 00:22:58.145 "subsystem": "nbd", 00:22:58.145 "config": [] 00:22:58.145 } 00:22:58.145 ] 00:22:58.145 }' 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3811438 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811438 ']' 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3811438 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811438 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811438' 00:22:58.145 killing process with pid 3811438 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811438 00:22:58.145 Received shutdown signal, test time was about 1.000000 seconds 00:22:58.145 00:22:58.145 Latency(us) 00:22:58.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.145 
=================================================================================================================== 00:22:58.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.145 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811438 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3811342 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811342 ']' 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3811342 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811342 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811342' 00:22:58.404 killing process with pid 3811342 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811342 00:22:58.404 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811342 00:22:58.662 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:22:58.662 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.662 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:22:58.662 "subsystems": [ 00:22:58.662 { 00:22:58.662 "subsystem": "keyring", 00:22:58.662 "config": [ 00:22:58.662 { 00:22:58.662 "method": "keyring_file_add_key", 00:22:58.662 "params": { 00:22:58.662 "name": "key0", 00:22:58.662 "path": "/tmp/tmp.UmdSYQy2jI" 00:22:58.662 } 00:22:58.662 } 00:22:58.662 ] 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "subsystem": "iobuf", 00:22:58.662 "config": [ 00:22:58.662 { 00:22:58.662 "method": "iobuf_set_options", 00:22:58.662 "params": { 00:22:58.662 "small_pool_count": 8192, 00:22:58.662 "large_pool_count": 1024, 00:22:58.662 "small_bufsize": 8192, 00:22:58.662 "large_bufsize": 135168 00:22:58.662 } 00:22:58.662 } 00:22:58.662 ] 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "subsystem": "sock", 00:22:58.662 "config": [ 00:22:58.662 { 00:22:58.662 "method": "sock_set_default_impl", 00:22:58.662 "params": { 00:22:58.662 "impl_name": "posix" 00:22:58.662 } 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "method": "sock_impl_set_options", 00:22:58.662 "params": { 00:22:58.662 "impl_name": "ssl", 00:22:58.662 "recv_buf_size": 4096, 00:22:58.662 "send_buf_size": 4096, 00:22:58.662 "enable_recv_pipe": true, 00:22:58.662 "enable_quickack": false, 00:22:58.662 "enable_placement_id": 0, 00:22:58.662 "enable_zerocopy_send_server": true, 00:22:58.662 "enable_zerocopy_send_client": false, 00:22:58.662 "zerocopy_threshold": 0, 00:22:58.662 "tls_version": 0, 00:22:58.662 "enable_ktls": false 00:22:58.662 } 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "method": "sock_impl_set_options", 00:22:58.662 "params": { 00:22:58.662 "impl_name": "posix", 00:22:58.662 "recv_buf_size": 2097152, 
00:22:58.662 "send_buf_size": 2097152, 00:22:58.662 "enable_recv_pipe": true, 00:22:58.662 "enable_quickack": false, 00:22:58.662 "enable_placement_id": 0, 00:22:58.662 "enable_zerocopy_send_server": true, 00:22:58.662 "enable_zerocopy_send_client": false, 00:22:58.662 "zerocopy_threshold": 0, 00:22:58.662 "tls_version": 0, 00:22:58.662 "enable_ktls": false 00:22:58.662 } 00:22:58.662 } 00:22:58.662 ] 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "subsystem": "vmd", 00:22:58.662 "config": [] 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "subsystem": "accel", 00:22:58.662 "config": [ 00:22:58.662 { 00:22:58.662 "method": "accel_set_options", 00:22:58.662 "params": { 00:22:58.662 "small_cache_size": 128, 00:22:58.662 "large_cache_size": 16, 00:22:58.662 "task_count": 2048, 00:22:58.662 "sequence_count": 2048, 00:22:58.662 "buf_count": 2048 00:22:58.662 } 00:22:58.662 } 00:22:58.662 ] 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "subsystem": "bdev", 00:22:58.662 "config": [ 00:22:58.662 { 00:22:58.662 "method": "bdev_set_options", 00:22:58.662 "params": { 00:22:58.662 "bdev_io_pool_size": 65535, 00:22:58.662 "bdev_io_cache_size": 256, 00:22:58.662 "bdev_auto_examine": true, 00:22:58.662 "iobuf_small_cache_size": 128, 00:22:58.662 "iobuf_large_cache_size": 16 00:22:58.662 } 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "method": "bdev_raid_set_options", 00:22:58.662 "params": { 00:22:58.662 "process_window_size_kb": 1024, 00:22:58.662 "process_max_bandwidth_mb_sec": 0 00:22:58.662 } 00:22:58.662 }, 00:22:58.662 { 00:22:58.662 "method": "bdev_iscsi_set_options", 00:22:58.662 "params": { 00:22:58.662 "timeout_sec": 30 00:22:58.662 } 00:22:58.662 }, 00:22:58.663 { 00:22:58.663 "method": "bdev_nvme_set_options", 00:22:58.663 "params": { 00:22:58.663 "action_on_timeout": "none", 00:22:58.663 "timeout_us": 0, 00:22:58.663 "timeout_admin_us": 0, 00:22:58.663 "keep_alive_timeout_ms": 10000, 00:22:58.663 "arbitration_burst": 0, 00:22:58.663 "low_priority_weight": 0, 00:22:58.663 "medium_priority_weight": 0, 00:22:58.663 "high_priority_weight": 0, 00:22:58.663 "nvme_adminq_poll_period_us": 10000, 00:22:58.663 "nvme_ioq_poll_period_us": 0, 00:22:58.663 "io_queue_requests": 0, 00:22:58.663 "delay_cmd_submit": true, 00:22:58.663 "transport_retry_count": 4, 00:22:58.663 "bdev_retry_count": 3, 00:22:58.663 "transport_ack_timeout": 0, 00:22:58.663 "ctrlr_loss_timeout_sec": 0, 00:22:58.663 "reconnect_delay_sec": 0, 00:22:58.663 "fast_io_fail_timeout_sec": 0, 00:22:58.663 "disable_auto_failback": false, 00:22:58.663 "generate_uuids": false, 00:22:58.663 "transport_tos": 0, 00:22:58.663 "nvme_error_stat": false, 00:22:58.663 "rdma_srq_size": 0, 00:22:58.663 "io_path_stat": false, 00:22:58.663 "allow_accel_sequence": false, 00:22:58.663 "rdma_max_cq_size": 0, 00:22:58.663 "rdma_cm_event_timeout_ms": 0, 00:22:58.663 "dhchap_digests": [ 00:22:58.663 "sha256", 00:22:58.663 "sha384", 00:22:58.663 "sha512" 00:22:58.663 ], 00:22:58.663 "dhchap_dhgroups": [ 00:22:58.663 "null", 00:22:58.663 "ffdhe2048", 00:22:58.663 "ffdhe3072", 00:22:58.663 "ffdhe4096", 00:22:58.663 "ffdhe6144", 00:22:58.663 "ffdhe8192" 00:22:58.663 ] 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "bdev_nvme_set_hotplug", 00:22:58.663 "params": { 00:22:58.663 "period_us": 100000, 00:22:58.663 "enable": false 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "bdev_malloc_create", 00:22:58.663 "params": { 00:22:58.663 "name": "malloc0", 00:22:58.663 "num_blocks": 8192, 00:22:58.663 "block_size": 4096, 00:22:58.663 
"physical_block_size": 4096, 00:22:58.663 "uuid": "01fdbe6b-0d97-43f8-9a2c-bf4cbb3527e5", 00:22:58.663 "optimal_io_boundary": 0, 00:22:58.663 "md_size": 0, 00:22:58.663 "dif_type": 0, 00:22:58.663 "dif_is_head_of_md": false, 00:22:58.663 "dif_pi_format": 0 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "bdev_wait_for_examine" 00:22:58.663 } 00:22:58.663 ] 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "subsystem": "nbd", 00:22:58.663 "config": [] 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "subsystem": "scheduler", 00:22:58.663 "config": [ 00:22:58.663 { 00:22:58.663 "method": "framework_set_scheduler", 00:22:58.663 "params": { 00:22:58.663 "name": "static" 00:22:58.663 } 00:22:58.663 } 00:22:58.663 ] 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "subsystem": "nvmf", 00:22:58.663 "config": [ 00:22:58.663 { 00:22:58.663 "method": "nvmf_set_config", 00:22:58.663 "params": { 00:22:58.663 "discovery_filter": "match_any", 00:22:58.663 "admin_cmd_passthru": { 00:22:58.663 "identify_ctrlr": false 00:22:58.663 } 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_set_max_subsystems", 00:22:58.663 "params": { 00:22:58.663 "max_subsystems": 1024 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_set_crdt", 00:22:58.663 "params": { 00:22:58.663 "crdt1": 0, 00:22:58.663 "crdt2": 0, 00:22:58.663 "crdt3": 0 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_create_transport", 00:22:58.663 "params": { 00:22:58.663 "trtype": "TCP", 00:22:58.663 "max_queue_depth": 128, 00:22:58.663 "max_io_qpairs_per_ctrlr": 127, 00:22:58.663 "in_capsule_data_size": 4096, 00:22:58.663 "max_io_size": 131072, 00:22:58.663 "io_unit_size": 131072, 00:22:58.663 "max_aq_depth": 128, 00:22:58.663 "num_shared_buffers": 511, 00:22:58.663 "buf_cache_size": 4294967295, 00:22:58.663 "dif_insert_or_strip": false, 00:22:58.663 "zcopy": false, 00:22:58.663 "c2h_success": false, 00:22:58.663 "sock_priority": 0, 00:22:58.663 "abort_timeout_sec": 1, 00:22:58.663 "ack_timeout": 0, 00:22:58.663 "data_wr_pool_size": 0 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_create_subsystem", 00:22:58.663 "params": { 00:22:58.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.663 "allow_any_host": false, 00:22:58.663 "serial_number": "00000000000000000000", 00:22:58.663 "model_number": "SPDK bdev Controller", 00:22:58.663 "max_namespaces": 32, 00:22:58.663 "min_cntlid": 1, 00:22:58.663 "max_cntlid": 65519, 00:22:58.663 "ana_reporting": false 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_subsystem_add_host", 00:22:58.663 "params": { 00:22:58.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.663 "host": "nqn.2016-06.io.spdk:host1", 00:22:58.663 "psk": "key0" 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_subsystem_add_ns", 00:22:58.663 "params": { 00:22:58.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.663 "namespace": { 00:22:58.663 "nsid": 1, 00:22:58.663 "bdev_name": "malloc0", 00:22:58.663 "nguid": "01FDBE6B0D9743F89A2CBF4CBB3527E5", 00:22:58.663 "uuid": "01fdbe6b-0d97-43f8-9a2c-bf4cbb3527e5", 00:22:58.663 "no_auto_visible": false 00:22:58.663 } 00:22:58.663 } 00:22:58.663 }, 00:22:58.663 { 00:22:58.663 "method": "nvmf_subsystem_add_listener", 00:22:58.663 "params": { 00:22:58.663 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:58.663 "listen_address": { 00:22:58.663 "trtype": "TCP", 00:22:58.663 "adrfam": "IPv4", 00:22:58.663 "traddr": "10.0.0.2", 00:22:58.663 "trsvcid": "4420" 
00:22:58.663 }, 00:22:58.663 "secure_channel": false, 00:22:58.663 "sock_impl": "ssl" 00:22:58.663 } 00:22:58.663 } 00:22:58.663 ] 00:22:58.663 } 00:22:58.663 ] 00:22:58.663 }' 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3811778 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3811778 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811778 ']' 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.663 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.664 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.664 09:08:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.664 [2024-07-24 09:08:36.610999] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:58.664 [2024-07-24 09:08:36.611095] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.664 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.664 [2024-07-24 09:08:36.646923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:22:58.664 [2024-07-24 09:08:36.678314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.664 [2024-07-24 09:08:36.770099] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.664 [2024-07-24 09:08:36.770182] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.664 [2024-07-24 09:08:36.770205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.664 [2024-07-24 09:08:36.770216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.664 [2024-07-24 09:08:36.770226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
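The target-side JSON above wires TLS in three places: a keyring_file_add_key entry naming the PSK file, nvmf_subsystem_add_host carrying "psk": "key0", and nvmf_subsystem_add_listener selecting "sock_impl": "ssl". Driven by hand over the target's RPC socket instead of via -c /dev/fd/62, the equivalent state would look roughly like the sketch below; the flag spellings are the usual rpc.py ones and are an assumption here, not copied from this run.

    rpc.py keyring_file_add_key key0 /tmp/tmp.UmdSYQy2jI
    rpc.py nvmf_create_transport -t tcp
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 00000000000000000000 -m 32
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 --sock-impl ssl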
00:22:58.664 [2024-07-24 09:08:36.770297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.922 [2024-07-24 09:08:37.009829] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:59.179 [2024-07-24 09:08:37.051947] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:59.179 [2024-07-24 09:08:37.052200] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3811928 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3811928 /var/tmp/bdevperf.sock 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3811928 ']' 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
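On the initiator side, the configuration echoed next mirrors the target: the same PSK file is registered as key0 and then referenced by bdev_nvme_attach_controller ("psk": "key0"), which is what upgrades the TCP connection to TLS. Replayed manually against the bdevperf RPC socket, the two essential calls would be roughly as follows (again, option spellings are assumed, not taken from the log):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UmdSYQy2jI
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0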
00:22:59.745 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:22:59.745 "subsystems": [ 00:22:59.745 { 00:22:59.745 "subsystem": "keyring", 00:22:59.745 "config": [ 00:22:59.745 { 00:22:59.745 "method": "keyring_file_add_key", 00:22:59.745 "params": { 00:22:59.745 "name": "key0", 00:22:59.745 "path": "/tmp/tmp.UmdSYQy2jI" 00:22:59.745 } 00:22:59.745 } 00:22:59.745 ] 00:22:59.745 }, 00:22:59.745 { 00:22:59.745 "subsystem": "iobuf", 00:22:59.745 "config": [ 00:22:59.745 { 00:22:59.745 "method": "iobuf_set_options", 00:22:59.745 "params": { 00:22:59.746 "small_pool_count": 8192, 00:22:59.746 "large_pool_count": 1024, 00:22:59.746 "small_bufsize": 8192, 00:22:59.746 "large_bufsize": 135168 00:22:59.746 } 00:22:59.746 } 00:22:59.746 ] 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "subsystem": "sock", 00:22:59.746 "config": [ 00:22:59.746 { 00:22:59.746 "method": "sock_set_default_impl", 00:22:59.746 "params": { 00:22:59.746 "impl_name": "posix" 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "sock_impl_set_options", 00:22:59.746 "params": { 00:22:59.746 "impl_name": "ssl", 00:22:59.746 "recv_buf_size": 4096, 00:22:59.746 "send_buf_size": 4096, 00:22:59.746 "enable_recv_pipe": true, 00:22:59.746 "enable_quickack": false, 00:22:59.746 "enable_placement_id": 0, 00:22:59.746 "enable_zerocopy_send_server": true, 00:22:59.746 "enable_zerocopy_send_client": false, 00:22:59.746 "zerocopy_threshold": 0, 00:22:59.746 "tls_version": 0, 00:22:59.746 "enable_ktls": false 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "sock_impl_set_options", 00:22:59.746 "params": { 00:22:59.746 "impl_name": "posix", 00:22:59.746 "recv_buf_size": 2097152, 00:22:59.746 "send_buf_size": 2097152, 00:22:59.746 "enable_recv_pipe": true, 00:22:59.746 "enable_quickack": false, 00:22:59.746 "enable_placement_id": 0, 00:22:59.746 "enable_zerocopy_send_server": true, 00:22:59.746 "enable_zerocopy_send_client": false, 00:22:59.746 "zerocopy_threshold": 0, 00:22:59.746 "tls_version": 0, 00:22:59.746 "enable_ktls": false 00:22:59.746 } 00:22:59.746 } 00:22:59.746 ] 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "subsystem": "vmd", 00:22:59.746 "config": [] 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "subsystem": "accel", 00:22:59.746 "config": [ 00:22:59.746 { 00:22:59.746 "method": "accel_set_options", 00:22:59.746 "params": { 00:22:59.746 "small_cache_size": 128, 00:22:59.746 "large_cache_size": 16, 00:22:59.746 "task_count": 2048, 00:22:59.746 "sequence_count": 2048, 00:22:59.746 "buf_count": 2048 00:22:59.746 } 00:22:59.746 } 00:22:59.746 ] 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "subsystem": "bdev", 00:22:59.746 "config": [ 00:22:59.746 { 00:22:59.746 "method": "bdev_set_options", 00:22:59.746 "params": { 00:22:59.746 "bdev_io_pool_size": 65535, 00:22:59.746 "bdev_io_cache_size": 256, 00:22:59.746 "bdev_auto_examine": true, 00:22:59.746 "iobuf_small_cache_size": 128, 00:22:59.746 "iobuf_large_cache_size": 16 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_raid_set_options", 00:22:59.746 "params": { 00:22:59.746 "process_window_size_kb": 1024, 00:22:59.746 "process_max_bandwidth_mb_sec": 0 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_iscsi_set_options", 00:22:59.746 "params": { 00:22:59.746 "timeout_sec": 30 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_nvme_set_options", 00:22:59.746 "params": { 00:22:59.746 "action_on_timeout": "none", 00:22:59.746 "timeout_us": 0, 
00:22:59.746 "timeout_admin_us": 0, 00:22:59.746 "keep_alive_timeout_ms": 10000, 00:22:59.746 "arbitration_burst": 0, 00:22:59.746 "low_priority_weight": 0, 00:22:59.746 "medium_priority_weight": 0, 00:22:59.746 "high_priority_weight": 0, 00:22:59.746 "nvme_adminq_poll_period_us": 10000, 00:22:59.746 "nvme_ioq_poll_period_us": 0, 00:22:59.746 "io_queue_requests": 512, 00:22:59.746 "delay_cmd_submit": true, 00:22:59.746 "transport_retry_count": 4, 00:22:59.746 "bdev_retry_count": 3, 00:22:59.746 "transport_ack_timeout": 0, 00:22:59.746 "ctrlr_loss_timeout_sec": 0, 00:22:59.746 "reconnect_delay_sec": 0, 00:22:59.746 "fast_io_fail_timeout_sec": 0, 00:22:59.746 "disable_auto_failback": false, 00:22:59.746 "generate_uuids": false, 00:22:59.746 "transport_tos": 0, 00:22:59.746 "nvme_error_stat": false, 00:22:59.746 "rdma_srq_size": 0, 00:22:59.746 "io_path_stat": false, 00:22:59.746 "allow_accel_sequence": false, 00:22:59.746 "rdma_max_cq_size": 0, 00:22:59.746 "rdma_cm_event_timeout_ms": 0, 00:22:59.746 "dhchap_digests": [ 00:22:59.746 "sha256", 00:22:59.746 "sha384", 00:22:59.746 "sha512" 00:22:59.746 ], 00:22:59.746 "dhchap_dhgroups": [ 00:22:59.746 "null", 00:22:59.746 "ffdhe2048", 00:22:59.746 "ffdhe3072", 00:22:59.746 "ffdhe4096", 00:22:59.746 "ffdhe6144", 00:22:59.746 "ffdhe8192" 00:22:59.746 ] 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_nvme_attach_controller", 00:22:59.746 "params": { 00:22:59.746 "name": "nvme0", 00:22:59.746 "trtype": "TCP", 00:22:59.746 "adrfam": "IPv4", 00:22:59.746 "traddr": "10.0.0.2", 00:22:59.746 "trsvcid": "4420", 00:22:59.746 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.746 "prchk_reftag": false, 00:22:59.746 "prchk_guard": false, 00:22:59.746 "ctrlr_loss_timeout_sec": 0, 00:22:59.746 "reconnect_delay_sec": 0, 00:22:59.746 "fast_io_fail_timeout_sec": 0, 00:22:59.746 "psk": "key0", 00:22:59.746 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.746 "hdgst": false, 00:22:59.746 "ddgst": false 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_nvme_set_hotplug", 00:22:59.746 "params": { 00:22:59.746 "period_us": 100000, 00:22:59.746 "enable": false 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_enable_histogram", 00:22:59.746 "params": { 00:22:59.746 "name": "nvme0n1", 00:22:59.746 "enable": true 00:22:59.746 } 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "method": "bdev_wait_for_examine" 00:22:59.746 } 00:22:59.746 ] 00:22:59.746 }, 00:22:59.746 { 00:22:59.746 "subsystem": "nbd", 00:22:59.746 "config": [] 00:22:59.746 } 00:22:59.746 ] 00:22:59.746 }' 00:22:59.747 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.747 09:08:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.747 [2024-07-24 09:08:37.714750] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:22:59.747 [2024-07-24 09:08:37.714839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3811928 ] 00:22:59.747 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.747 [2024-07-24 09:08:37.745979] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:22:59.747 [2024-07-24 09:08:37.776908] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.005 [2024-07-24 09:08:37.867746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.005 [2024-07-24 09:08:38.046173] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.570 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.570 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:00.570 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.570 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:23:00.828 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.828 09:08:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:01.085 Running I/O for 1 seconds... 00:23:02.017 00:23:02.017 Latency(us) 00:23:02.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.017 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:02.017 Verification LBA range: start 0x0 length 0x2000 00:23:02.017 nvme0n1 : 1.06 1460.40 5.70 0.00 0.00 85902.91 6602.15 57865.86 00:23:02.017 =================================================================================================================== 00:23:02.017 Total : 1460.40 5.70 0.00 0.00 85902.91 6602.15 57865.86 00:23:02.017 0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:02.017 nvmf_trace.0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:02.017 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3811928 00:23:02.018 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811928 ']' 00:23:02.018 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@952 -- # kill -0 3811928 00:23:02.018 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811928 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811928' 00:23:02.275 killing process with pid 3811928 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811928 00:23:02.275 Received shutdown signal, test time was about 1.000000 seconds 00:23:02.275 00:23:02.275 Latency(us) 00:23:02.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.275 =================================================================================================================== 00:23:02.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811928 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:02.275 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.276 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:02.276 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.276 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:02.276 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.276 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.276 rmmod nvme_tcp 00:23:02.534 rmmod nvme_fabrics 00:23:02.534 rmmod nvme_keyring 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3811778 ']' 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3811778 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3811778 ']' 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3811778 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3811778 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:02.534 09:08:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3811778' 00:23:02.534 killing process with pid 3811778 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3811778 00:23:02.534 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3811778 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:02.793 09:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GZad1oWyqs /tmp/tmp.mEFOWKcEUr /tmp/tmp.UmdSYQy2jI 00:23:04.728 00:23:04.728 real 1m19.198s 00:23:04.728 user 2m1.435s 00:23:04.728 sys 0m27.260s 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.728 ************************************ 00:23:04.728 END TEST nvmf_tls 00:23:04.728 ************************************ 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:04.728 ************************************ 00:23:04.728 START TEST nvmf_fips 00:23:04.728 ************************************ 00:23:04.728 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:04.988 * Looking for test storage... 
00:23:04.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:23:04.988 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:04.989 Error setting digest 00:23:04.989 0012BC57397F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:04.989 0012BC57397F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:04.989 09:08:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:06.892 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:06.893 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 
00:23:06.893 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:06.893 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:07.152 Found net devices under 0000:09:00.0: cvl_0_0 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:07.152 Found net devices under 0000:09:00.1: cvl_0_1 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:07.152 
09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:07.152 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:07.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:07.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:23:07.153 00:23:07.153 --- 10.0.0.2 ping statistics --- 00:23:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.153 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:07.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:07.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:23:07.153 00:23:07.153 --- 10.0.0.1 ping statistics --- 00:23:07.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:07.153 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3814282 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3814282 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3814282 ']' 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.153 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.153 [2024-07-24 09:08:45.245618] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
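At this point the trace has finished building the test bed and is booting the target inside it. What nvmftestinit did above is turn one dual-port NIC into a self-contained NVMe/TCP topology: the first port (cvl_0_0) was moved into a private network namespace and addressed 10.0.0.2, the second port (cvl_0_1) stayed in the root namespace as the initiator side at 10.0.0.1, tcp/4420 was opened in iptables, and a ping in each direction proved the loop before nvmf_tgt was launched under ip netns exec. A minimal sketch of the same setup, with the hypothetical names port0/port1/tgt_ns standing in for cvl_0_0/cvl_0_1/cvl_0_0_ns_spdk:

  ip netns add tgt_ns                                # namespace for the target side
  ip link set port0 netns tgt_ns                     # first port becomes the target NIC
  ip addr add 10.0.0.1/24 dev port1                  # initiator address, root namespace
  ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev port0
  ip link set port1 up
  ip netns exec tgt_ns ip link set port0 up
  ip netns exec tgt_ns ip link set lo up
  iptables -I INPUT 1 -i port1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
  ping -c 1 10.0.0.2                                 # root ns -> namespaced port
  ip netns exec tgt_ns ping -c 1 10.0.0.1            # namespaced port -> root ns

Because the two ports belong to the same physical NIC (presumably cabled back-to-back or through a switch in this phy rig), traffic between 10.0.0.1 and 10.0.0.2 actually leaves and re-enters the machine, rather than short-circuiting over loopback.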
00:23:07.153 [2024-07-24 09:08:45.245721] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:07.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.411 [2024-07-24 09:08:45.284987] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:23:07.411 [2024-07-24 09:08:45.315196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.411 [2024-07-24 09:08:45.411443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:07.411 [2024-07-24 09:08:45.411505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:07.411 [2024-07-24 09:08:45.411521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:07.411 [2024-07-24 09:08:45.411534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:07.411 [2024-07-24 09:08:45.411545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:07.411 [2024-07-24 09:08:45.411584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:07.669 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:07.669 [2024-07-24 09:08:45.784479] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.930 [2024-07-24 09:08:45.800487] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS 
support is considered experimental 00:23:07.930 [2024-07-24 09:08:45.800707] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.930 [2024-07-24 09:08:45.832215] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.930 malloc0 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3814315 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3814315 /var/tmp/bdevperf.sock 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3814315 ']' 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:07.930 09:08:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:07.930 [2024-07-24 09:08:45.919970] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:23:07.930 [2024-07-24 09:08:45.920050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3814315 ] 00:23:07.930 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.930 [2024-07-24 09:08:45.951654] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
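The FIPS test exercises TLS end to end with a pre-shared key in the NVMe interchange format (NVMeTLSkey-1:01:<base64 secret>:, the 01 field identifying the SHA-256 variant). The trace above writes that key to key.txt with 0600 permissions, configures the target over rpc.py, and starts a bdevperf instance with its own RPC socket, waiting to attach. A hedged sketch of that wiring, with paths shortened: the final attach_controller line is taken verbatim from the trace below, while the subsystem/listener/host verbs and the --secure-channel flag are the usual rpc.py calls for this and are assumed here (the deprecation warnings above show the PSK-path variant of add_host is what this run uses).

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  echo -n "$key" > key.txt && chmod 0600 key.txt     # keep the PSK file private, as the test does
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt
  # Initiator side: tell the already-running bdevperf to attach over TLS
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key.txt

Both sides must hold the same interchange key; bdevperf then runs a timed verify workload (TLSTESTn1) across the TLS channel, which is the actual pass/fail signal of the FIPS test.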
00:23:07.930 [2024-07-24 09:08:45.977966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.187 [2024-07-24 09:08:46.071639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.187 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:08.187 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:08.187 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:08.445 [2024-07-24 09:08:46.399211] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.445 [2024-07-24 09:08:46.399345] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:08.445 TLSTESTn1 00:23:08.445 09:08:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.702 Running I/O for 10 seconds... 00:23:18.670 00:23:18.670 Latency(us) 00:23:18.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.670 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.670 Verification LBA range: start 0x0 length 0x2000 00:23:18.670 TLSTESTn1 : 10.03 3204.64 12.52 0.00 0.00 39849.76 8543.95 66021.45 00:23:18.670 =================================================================================================================== 00:23:18.670 Total : 3204.64 12.52 0.00 0.00 39849.76 8543.95 66021.45 00:23:18.670 0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:18.670 nvmf_trace.0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3814315 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' 
-z 3814315 ']' 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3814315 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3814315 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3814315' 00:23:18.670 killing process with pid 3814315 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3814315 00:23:18.670 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.670 00:23:18.670 Latency(us) 00:23:18.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.670 =================================================================================================================== 00:23:18.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.670 [2024-07-24 09:08:56.769478] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.670 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3814315 00:23:18.928 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:18.928 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.928 09:08:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:18.928 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.928 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:18.928 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.928 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.928 rmmod nvme_tcp 00:23:18.928 rmmod nvme_fabrics 00:23:18.928 rmmod nvme_keyring 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3814282 ']' 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3814282 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3814282 ']' 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3814282 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 3814282 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3814282' 00:23:19.186 killing process with pid 3814282 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3814282 00:23:19.186 [2024-07-24 09:08:57.083166] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:19.186 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3814282 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.444 09:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:21.346 00:23:21.346 real 0m16.575s 00:23:21.346 user 0m20.971s 00:23:21.346 sys 0m5.888s 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 ************************************ 00:23:21.346 END TEST nvmf_fips 00:23:21.346 ************************************ 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 1 -eq 1 ']' 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@46 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:21.346 ************************************ 00:23:21.346 START TEST nvmf_fuzz 00:23:21.346 ************************************ 00:23:21.346 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:21.606 * Looking for test storage... 
00:23:21.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 
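From here the fuzz stage rebuilds the same namespace topology, creates a Malloc0-backed subsystem, and then aims SPDK's nvme_fuzz at it twice: once generating random admin and I/O commands for a fixed time from a fixed seed, and once driving the command templates in a JSON file. Condensed from the exact invocations that appear further down in this trace (paths shortened; the -N and -a flags are passed through as the test uses them):

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # 30 seconds of random commands; -S fixes the seed so any crash is reproducible
  nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  # Second pass: commands described by the repo's example.json template instead
  nvme_fuzz -m 0x2 -F "$trid" -j example.json -a

The opcode dumps printed at the end of each run (successful admin and I/O opcodes, command counts, the seed actually used) are the artifacts to keep when triaging a failure, since re-running with the same -S value replays the same command stream.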
00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.606 09:08:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:23.510 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:23.510 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.510 09:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:23.510 Found net devices under 0000:09:00.0: cvl_0_0 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:23.510 Found net devices under 0000:09:00.1: cvl_0_1 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:23.510 
09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:23.510 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:23.510 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:23.510 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:23:23.510 00:23:23.510 --- 10.0.0.2 ping statistics --- 00:23:23.510 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.511 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:23.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:23.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:23:23.511 00:23:23.511 --- 10.0.0.1 ping statistics --- 00:23:23.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:23.511 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3817643 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3817643 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3817643 
']' 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.511 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:23.769 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:23.769 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:23.769 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:23.769 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:23.769 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.027 Malloc0 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 
subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:24.027 09:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:23:56.090 Fuzzing completed. Shutting down the fuzz application 00:23:56.090 00:23:56.090 Dumping successful admin opcodes: 00:23:56.090 8, 9, 10, 24, 00:23:56.090 Dumping successful io opcodes: 00:23:56.090 0, 9, 00:23:56.090 NS: 0x200003aeff00 I/O qp, Total commands completed: 459829, total successful commands: 2663, random_seed: 1364438400 00:23:56.090 NS: 0x200003aeff00 admin qp, Total commands completed: 56400, total successful commands: 447, random_seed: 1591339200 00:23:56.090 09:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:23:56.090 Fuzzing completed. Shutting down the fuzz application 00:23:56.090 00:23:56.090 Dumping successful admin opcodes: 00:23:56.090 24, 00:23:56.090 Dumping successful io opcodes: 00:23:56.090 00:23:56.090 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 455633320 00:23:56.090 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 455753455 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.090 rmmod nvme_tcp 00:23:56.090 rmmod nvme_fabrics 00:23:56.090 rmmod nvme_keyring 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 3817643 ']' 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 
3817643 00:23:56.090 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3817643 ']' 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 3817643 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3817643 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3817643' 00:23:56.091 killing process with pid 3817643 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 3817643 00:23:56.091 09:09:33 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 3817643 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.091 09:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:23:58.625 00:23:58.625 real 0m36.747s 00:23:58.625 user 0m51.019s 00:23:58.625 sys 0m14.686s 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 ************************************ 00:23:58.625 END TEST nvmf_fuzz 00:23:58.625 ************************************ 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:58.625 ************************************ 00:23:58.625 START TEST 
nvmf_multiconnection 00:23:58.625 ************************************ 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:23:58.625 * Looking for test storage... 00:23:58.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:23:58.625 09:09:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.536 09:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:00.536 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- 
# [[ tcp == rdma ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:00.536 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:00.536 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:00.537 Found net devices under 0000:09:00.0: cvl_0_0 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:00.537 Found net devices 
under 0000:09:00.1: cvl_0_1 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
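At this point nvmf_tcp_init has wired the two ice ports into a back-to-back target/initiator pair: cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) as the target interface at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the wiring is then verified with a ping in each direction, as the replies below show. A minimal sketch of the same setup, using the interface and namespace names from this log (root required; a sketch, not the full nvmf/common.sh logic):

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1       # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                             # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP through
  ping -c 1 10.0.0.2                                       # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator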
00:24:00.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:24:00.537 00:24:00.537 --- 10.0.0.2 ping statistics --- 00:24:00.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.537 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:24:00.537 00:24:00.537 --- 10.0.0.1 ping statistics --- 00:24:00.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.537 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=3823777 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 3823777 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 3823777 ']' 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
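With connectivity confirmed, nvmfappstart launches the SPDK target inside that namespace and waits for it to come up. A sketch of the equivalent steps, assuming $SPDK points at the build tree and using the stock rpc.py client to stand in for waitforlisten:

  SPDK=/path/to/spdk    # assumed location of the SPDK checkout
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # poll until the target answers on its default UNIX-domain RPC socket
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1   # give up if the target died on startup
      sleep 0.5
  done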
00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.537 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.537 [2024-07-24 09:09:38.446851] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:24:00.537 [2024-07-24 09:09:38.446922] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.537 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.537 [2024-07-24 09:09:38.488957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:24:00.537 [2024-07-24 09:09:38.515699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.537 [2024-07-24 09:09:38.607808] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.537 [2024-07-24 09:09:38.607860] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.537 [2024-07-24 09:09:38.607887] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.537 [2024-07-24 09:09:38.607898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.537 [2024-07-24 09:09:38.607908] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.537 [2024-07-24 09:09:38.608039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.537 [2024-07-24 09:09:38.608078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.537 [2024-07-24 09:09:38.608172] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.537 [2024-07-24 09:09:38.608176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 [2024-07-24 09:09:38.764621] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 Malloc1 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 [2024-07-24 09:09:38.822502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 Malloc2 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 Malloc3 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.796 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 Malloc4 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 Malloc5 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 
09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 Malloc6 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.055 09:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 Malloc7 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 Malloc8 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:01.055 09:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.055 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 Malloc9 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 Malloc10 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 Malloc11 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
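The setup loop here issues the same four RPCs for each of the eleven subsystems: create a 64 MiB malloc bdev with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode$i with serial SPDK$i, attach the bdev as its namespace, and add a TCP listener on 10.0.0.2:4420. Condensed into a loop (rpc is a hypothetical stand-in for the harness's rpc_cmd):

  rpc() { "$SPDK/scripts/rpc.py" "$@"; }         # stand-in for rpc_cmd
  rpc nvmf_create_transport -t tcp -o -u 8192    # one TCP transport shared by all subsystems
  for i in $(seq 1 11); do
      rpc bdev_malloc_create 64 512 -b "Malloc$i"
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done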
00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:01.315 09:09:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:02.248 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:02.248 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:02.248 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:02.248 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:02.248 09:09:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK1 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:04.144 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:04.710 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:04.710 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:04.710 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:04.710 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:04.710 09:09:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK2 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:07.235 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:07.236 09:09:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:07.494 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:07.494 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:07.494 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:07.494 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:07.494 09:09:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK3 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:10.021 09:09:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:10.278 09:09:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:10.278 09:09:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:10.278 09:09:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:10.278 09:09:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:10.278 09:09:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK4 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:12.177 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:12.178 09:09:50 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:13.111 09:09:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:13.111 09:09:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:13.111 09:09:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:13.111 09:09:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:13.111 09:09:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK5 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:15.009 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:15.941 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:15.941 09:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:15.941 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:15.941 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:15.941 09:09:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK6 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:17.838 09:09:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:18.771 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:18.771 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:18.771 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:18.771 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:18.771 09:09:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK7 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:20.697 09:09:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 
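The trace above and below repeats one pattern per subsystem, cnode1 through cnode11: an "nvme connect" over TCP (multiconnection.sh line 29), followed by a waitforserial poll (the autotest_common.sh @1196-@1206 entries) that retries lsblk until a namespace with the expected SPDK serial shows up. A condensed sketch of that loop, assuming the harness variables visible in the trace (NVMF_SUBSYS is 11 here; NVME_HOSTNQN/NVME_HOSTID stand in for the literal host NQN and UUID above, and the retry helper is an approximation of the harness code, not a verbatim copy):

#!/usr/bin/env bash
# Connect to each SPDK subsystem over TCP, then wait for its namespace.
for i in $(seq 1 "$NVMF_SUBSYS"); do
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n "nqn.2016-06.io.spdk:cnode${i}" -a 10.0.0.2 -s 4420
    # waitforserial: up to 16 attempts, ~2s apart, mirroring the trace's
    # "(( i++ <= 15 ))" / "lsblk -l -o NAME,SERIAL | grep -c SPDK<i>" checks.
    n=0
    while (( n++ <= 15 )); do
        (( $(lsblk -l -o NAME,SERIAL | grep -c "SPDK${i}") >= 1 )) && break
        sleep 2
    done
done

In the run logged here every serial appears on the first check after the initial 2-second sleep (nvme_devices=1 each time), so the poll never comes close to exhausting its retries.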
00:24:21.263 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:21.263 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:21.263 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:21.263 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:21.263 09:09:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK8 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.790 09:10:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:24.354 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:24.354 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:24.354 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:24.354 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:24.354 09:10:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK9 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.260 09:10:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:27.193 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:27.193 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:27.193 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:27.193 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:27.193 09:10:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK10 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:29.088 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:29.089 09:10:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:30.021 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:30.021 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1196 -- # local i=0 00:24:30.021 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:24:30.021 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:24:30.021 09:10:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # sleep 2 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # grep -c SPDK11 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # return 0 00:24:32.550 09:10:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:32.550 [global] 00:24:32.550 thread=1 00:24:32.550 invalidate=1 00:24:32.550 rw=read 00:24:32.550 time_based=1 00:24:32.550 runtime=10 00:24:32.550 ioengine=libaio 00:24:32.550 direct=1 00:24:32.550 bs=262144 00:24:32.550 iodepth=64 00:24:32.550 norandommap=1 00:24:32.550 numjobs=1 00:24:32.550 00:24:32.550 [job0] 00:24:32.550 filename=/dev/nvme0n1 00:24:32.550 [job1] 00:24:32.550 filename=/dev/nvme10n1 00:24:32.550 [job2] 00:24:32.550 filename=/dev/nvme1n1 00:24:32.550 [job3] 00:24:32.550 filename=/dev/nvme2n1 00:24:32.550 [job4] 00:24:32.550 filename=/dev/nvme3n1 00:24:32.550 [job5] 00:24:32.550 filename=/dev/nvme4n1 00:24:32.550 [job6] 00:24:32.550 filename=/dev/nvme5n1 00:24:32.550 [job7] 00:24:32.550 filename=/dev/nvme6n1 00:24:32.550 [job8] 00:24:32.550 filename=/dev/nvme7n1 00:24:32.550 [job9] 00:24:32.550 filename=/dev/nvme8n1 00:24:32.550 [job10] 00:24:32.550 filename=/dev/nvme9n1 00:24:32.550 Could not set queue depth (nvme0n1) 00:24:32.550 Could not set queue depth (nvme10n1) 00:24:32.550 Could not set queue depth (nvme1n1) 00:24:32.550 Could not set queue depth (nvme2n1) 00:24:32.550 Could not set queue depth (nvme3n1) 00:24:32.550 Could not set queue depth (nvme4n1) 00:24:32.550 Could not set queue depth (nvme5n1) 00:24:32.550 Could not set queue depth (nvme6n1) 00:24:32.550 Could not set queue depth (nvme7n1) 00:24:32.550 Could not set queue depth (nvme8n1) 00:24:32.550 Could not set queue depth (nvme9n1) 00:24:32.550 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:32.550 fio-3.35 00:24:32.550 Starting 11 threads 00:24:44.755 00:24:44.755 job0: (groupid=0, jobs=1): err= 0: pid=3828018: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=517, BW=129MiB/s (136MB/s)(1302MiB/10069msec) 00:24:44.755 slat (usec): min=9, max=102488, avg=1622.98, stdev=5592.90 00:24:44.755 clat (msec): min=7, max=344, avg=122.03, stdev=51.13 00:24:44.755 lat (msec): min=7, max=344, avg=123.65, stdev=51.93 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 23], 5.00th=[ 57], 10.00th=[ 65], 20.00th=[ 83], 00:24:44.755 | 30.00th=[ 93], 40.00th=[ 104], 50.00th=[ 115], 60.00th=[ 127], 00:24:44.755 | 70.00th=[ 
140], 80.00th=[ 155], 90.00th=[ 203], 95.00th=[ 228], 00:24:44.755 | 99.00th=[ 279], 99.50th=[ 288], 99.90th=[ 317], 99.95th=[ 317], 00:24:44.755 | 99.99th=[ 347] 00:24:44.755 bw ( KiB/s): min=62464, max=207872, per=7.47%, avg=131660.80, stdev=39141.89, samples=20 00:24:44.755 iops : min= 244, max= 812, avg=514.30, stdev=152.90, samples=20 00:24:44.755 lat (msec) : 10=0.17%, 20=0.69%, 50=2.09%, 100=33.40%, 250=61.37% 00:24:44.755 lat (msec) : 500=2.27% 00:24:44.755 cpu : usr=0.28%, sys=1.76%, ctx=1101, majf=0, minf=4097 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job1: (groupid=0, jobs=1): err= 0: pid=3828027: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=911, BW=228MiB/s (239MB/s)(2302MiB/10106msec) 00:24:44.755 slat (usec): min=13, max=60888, avg=1011.32, stdev=3102.11 00:24:44.755 clat (msec): min=6, max=213, avg=69.16, stdev=37.10 00:24:44.755 lat (msec): min=6, max=213, avg=70.17, stdev=37.63 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 34], 00:24:44.755 | 30.00th=[ 37], 40.00th=[ 50], 50.00th=[ 63], 60.00th=[ 74], 00:24:44.755 | 70.00th=[ 89], 80.00th=[ 105], 90.00th=[ 124], 95.00th=[ 138], 00:24:44.755 | 99.00th=[ 161], 99.50th=[ 180], 99.90th=[ 197], 99.95th=[ 213], 00:24:44.755 | 99.99th=[ 213] 00:24:44.755 bw ( KiB/s): min=113664, max=431616, per=13.29%, avg=234112.00, stdev=105250.15, samples=20 00:24:44.755 iops : min= 444, max= 1686, avg=914.50, stdev=411.13, samples=20 00:24:44.755 lat (msec) : 10=0.08%, 20=1.78%, 50=38.59%, 100=36.90%, 250=22.65% 00:24:44.755 cpu : usr=0.50%, sys=2.85%, ctx=1754, majf=0, minf=4097 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=9208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job2: (groupid=0, jobs=1): err= 0: pid=3828028: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=755, BW=189MiB/s (198MB/s)(1909MiB/10104msec) 00:24:44.755 slat (usec): min=9, max=139074, avg=1001.72, stdev=3889.56 00:24:44.755 clat (msec): min=4, max=317, avg=83.62, stdev=46.87 00:24:44.755 lat (msec): min=4, max=321, avg=84.63, stdev=47.33 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 12], 5.00th=[ 24], 10.00th=[ 34], 20.00th=[ 48], 00:24:44.755 | 30.00th=[ 57], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 82], 00:24:44.755 | 70.00th=[ 101], 80.00th=[ 123], 90.00th=[ 142], 95.00th=[ 176], 00:24:44.755 | 99.00th=[ 230], 99.50th=[ 259], 99.90th=[ 313], 99.95th=[ 317], 00:24:44.755 | 99.99th=[ 317] 00:24:44.755 bw ( KiB/s): min=90112, max=401920, per=11.00%, avg=193834.20, stdev=80234.84, samples=20 00:24:44.755 iops : min= 352, max= 1570, avg=757.15, stdev=313.42, samples=20 00:24:44.755 lat (msec) : 10=0.59%, 20=3.46%, 50=18.10%, 100=47.62%, 250=29.57% 00:24:44.755 lat (msec) : 500=0.67% 00:24:44.755 cpu : usr=0.41%, sys=2.31%, ctx=1404, majf=0, minf=4097 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 
16=0.2%, 32=0.4%, >=64=99.2% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=7634,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job3: (groupid=0, jobs=1): err= 0: pid=3828029: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=621, BW=155MiB/s (163MB/s)(1566MiB/10071msec) 00:24:44.755 slat (usec): min=9, max=50822, avg=1361.27, stdev=4086.68 00:24:44.755 clat (msec): min=13, max=285, avg=101.45, stdev=44.83 00:24:44.755 lat (msec): min=13, max=285, avg=102.82, stdev=45.40 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 34], 5.00th=[ 54], 10.00th=[ 60], 20.00th=[ 67], 00:24:44.755 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 88], 60.00th=[ 100], 00:24:44.755 | 70.00th=[ 115], 80.00th=[ 138], 90.00th=[ 159], 95.00th=[ 197], 00:24:44.755 | 99.00th=[ 249], 99.50th=[ 262], 99.90th=[ 275], 99.95th=[ 275], 00:24:44.755 | 99.99th=[ 288] 00:24:44.755 bw ( KiB/s): min=79872, max=261632, per=9.01%, avg=158694.40, stdev=57426.34, samples=20 00:24:44.755 iops : min= 312, max= 1022, avg=619.90, stdev=224.32, samples=20 00:24:44.755 lat (msec) : 20=0.26%, 50=3.26%, 100=56.90%, 250=38.65%, 500=0.94% 00:24:44.755 cpu : usr=0.28%, sys=2.21%, ctx=1240, majf=0, minf=4097 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=6262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job4: (groupid=0, jobs=1): err= 0: pid=3828032: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=548, BW=137MiB/s (144MB/s)(1382MiB/10072msec) 00:24:44.755 slat (usec): min=9, max=97177, avg=1347.91, stdev=4833.81 00:24:44.755 clat (msec): min=7, max=274, avg=115.12, stdev=39.20 00:24:44.755 lat (msec): min=8, max=314, avg=116.47, stdev=39.80 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 22], 5.00th=[ 64], 10.00th=[ 74], 20.00th=[ 84], 00:24:44.755 | 30.00th=[ 91], 40.00th=[ 101], 50.00th=[ 112], 60.00th=[ 122], 00:24:44.755 | 70.00th=[ 133], 80.00th=[ 146], 90.00th=[ 161], 95.00th=[ 190], 00:24:44.755 | 99.00th=[ 228], 99.50th=[ 236], 99.90th=[ 247], 99.95th=[ 251], 00:24:44.755 | 99.99th=[ 275] 00:24:44.755 bw ( KiB/s): min=77824, max=203776, per=7.94%, avg=139929.60, stdev=32509.34, samples=20 00:24:44.755 iops : min= 304, max= 796, avg=546.60, stdev=126.99, samples=20 00:24:44.755 lat (msec) : 10=0.09%, 20=0.76%, 50=2.13%, 100=37.08%, 250=59.87% 00:24:44.755 lat (msec) : 500=0.07% 00:24:44.755 cpu : usr=0.23%, sys=1.90%, ctx=1271, majf=0, minf=4097 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=5529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job5: (groupid=0, jobs=1): err= 0: pid=3828033: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=595, BW=149MiB/s (156MB/s)(1506MiB/10109msec) 00:24:44.755 slat (usec): min=10, max=188283, avg=975.22, stdev=5086.91 00:24:44.755 
clat (usec): min=805, max=390482, avg=106343.11, stdev=56382.35 00:24:44.755 lat (usec): min=855, max=390512, avg=107318.33, stdev=57063.84 00:24:44.755 clat percentiles (usec): 00:24:44.755 | 1.00th=[ 1401], 5.00th=[ 18220], 10.00th=[ 25822], 20.00th=[ 58459], 00:24:44.755 | 30.00th=[ 89654], 40.00th=[ 99091], 50.00th=[107480], 60.00th=[116917], 00:24:44.755 | 70.00th=[127402], 80.00th=[139461], 90.00th=[158335], 95.00th=[208667], 00:24:44.755 | 99.00th=[244319], 99.50th=[383779], 99.90th=[383779], 99.95th=[387974], 00:24:44.755 | 99.99th=[392168] 00:24:44.755 bw ( KiB/s): min=67584, max=289792, per=8.66%, avg=152550.40, stdev=54001.46, samples=20 00:24:44.755 iops : min= 264, max= 1132, avg=595.90, stdev=210.94, samples=20 00:24:44.755 lat (usec) : 1000=0.46% 00:24:44.755 lat (msec) : 2=1.23%, 4=0.43%, 10=0.45%, 20=3.54%, 50=11.27% 00:24:44.755 lat (msec) : 100=23.76%, 250=57.91%, 500=0.95% 00:24:44.755 cpu : usr=0.31%, sys=1.67%, ctx=1452, majf=0, minf=3721 00:24:44.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.755 issued rwts: total=6023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.755 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.755 job6: (groupid=0, jobs=1): err= 0: pid=3828034: Wed Jul 24 09:10:20 2024 00:24:44.755 read: IOPS=506, BW=127MiB/s (133MB/s)(1276MiB/10071msec) 00:24:44.755 slat (usec): min=10, max=108240, avg=1715.21, stdev=5918.53 00:24:44.755 clat (msec): min=2, max=345, avg=124.46, stdev=57.52 00:24:44.755 lat (msec): min=2, max=345, avg=126.17, stdev=58.28 00:24:44.755 clat percentiles (msec): 00:24:44.755 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 44], 20.00th=[ 84], 00:24:44.755 | 30.00th=[ 101], 40.00th=[ 113], 50.00th=[ 124], 60.00th=[ 134], 00:24:44.755 | 70.00th=[ 146], 80.00th=[ 161], 90.00th=[ 211], 95.00th=[ 232], 00:24:44.755 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 313], 99.95th=[ 342], 00:24:44.755 | 99.99th=[ 347] 00:24:44.755 bw ( KiB/s): min=76288, max=240640, per=7.32%, avg=129024.00, stdev=41449.20, samples=20 00:24:44.755 iops : min= 298, max= 940, avg=504.00, stdev=161.91, samples=20 00:24:44.755 lat (msec) : 4=0.39%, 10=2.18%, 20=2.14%, 50=7.05%, 100=18.24% 00:24:44.755 lat (msec) : 250=67.90%, 500=2.10% 00:24:44.755 cpu : usr=0.31%, sys=1.79%, ctx=1098, majf=0, minf=4097 00:24:44.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.756 issued rwts: total=5103,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.756 job7: (groupid=0, jobs=1): err= 0: pid=3828036: Wed Jul 24 09:10:20 2024 00:24:44.756 read: IOPS=513, BW=128MiB/s (135MB/s)(1298MiB/10104msec) 00:24:44.756 slat (usec): min=9, max=125209, avg=1300.78, stdev=5701.42 00:24:44.756 clat (msec): min=5, max=350, avg=123.12, stdev=55.10 00:24:44.756 lat (msec): min=5, max=350, avg=124.42, stdev=56.02 00:24:44.756 clat percentiles (msec): 00:24:44.756 | 1.00th=[ 20], 5.00th=[ 45], 10.00th=[ 64], 20.00th=[ 82], 00:24:44.756 | 30.00th=[ 94], 40.00th=[ 104], 50.00th=[ 113], 60.00th=[ 124], 00:24:44.756 | 70.00th=[ 138], 80.00th=[ 163], 90.00th=[ 213], 95.00th=[ 232], 00:24:44.756 | 99.00th=[ 266], 
99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 321], 00:24:44.756 | 99.99th=[ 351] 00:24:44.756 bw ( KiB/s): min=64000, max=202752, per=7.45%, avg=131302.40, stdev=37709.32, samples=20 00:24:44.756 iops : min= 250, max= 792, avg=512.90, stdev=147.30, samples=20 00:24:44.756 lat (msec) : 10=0.15%, 20=0.98%, 50=4.85%, 100=30.22%, 250=61.42% 00:24:44.756 lat (msec) : 500=2.37% 00:24:44.756 cpu : usr=0.22%, sys=1.63%, ctx=1240, majf=0, minf=4097 00:24:44.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:44.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.756 issued rwts: total=5192,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.756 job8: (groupid=0, jobs=1): err= 0: pid=3828037: Wed Jul 24 09:10:20 2024 00:24:44.756 read: IOPS=681, BW=170MiB/s (179MB/s)(1713MiB/10047msec) 00:24:44.756 slat (usec): min=13, max=106027, avg=1448.50, stdev=4997.23 00:24:44.756 clat (msec): min=15, max=303, avg=92.30, stdev=62.18 00:24:44.756 lat (msec): min=15, max=350, avg=93.75, stdev=63.22 00:24:44.756 clat percentiles (msec): 00:24:44.756 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 35], 20.00th=[ 39], 00:24:44.756 | 30.00th=[ 42], 40.00th=[ 50], 50.00th=[ 72], 60.00th=[ 94], 00:24:44.756 | 70.00th=[ 121], 80.00th=[ 146], 90.00th=[ 192], 95.00th=[ 222], 00:24:44.756 | 99.00th=[ 255], 99.50th=[ 268], 99.90th=[ 284], 99.95th=[ 288], 00:24:44.756 | 99.99th=[ 305] 00:24:44.756 bw ( KiB/s): min=65536, max=417280, per=9.86%, avg=173772.80, stdev=120346.93, samples=20 00:24:44.756 iops : min= 256, max= 1630, avg=678.80, stdev=470.11, samples=20 00:24:44.756 lat (msec) : 20=0.03%, 50=40.83%, 100=22.31%, 250=35.48%, 500=1.34% 00:24:44.756 cpu : usr=0.42%, sys=2.31%, ctx=1280, majf=0, minf=4097 00:24:44.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:24:44.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.756 issued rwts: total=6852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.756 job9: (groupid=0, jobs=1): err= 0: pid=3828038: Wed Jul 24 09:10:20 2024 00:24:44.756 read: IOPS=637, BW=159MiB/s (167MB/s)(1611MiB/10113msec) 00:24:44.756 slat (usec): min=9, max=80137, avg=1069.87, stdev=4050.04 00:24:44.756 clat (msec): min=2, max=300, avg=99.25, stdev=44.00 00:24:44.756 lat (msec): min=2, max=300, avg=100.32, stdev=44.36 00:24:44.756 clat percentiles (msec): 00:24:44.756 | 1.00th=[ 21], 5.00th=[ 45], 10.00th=[ 56], 20.00th=[ 65], 00:24:44.756 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 100], 00:24:44.756 | 70.00th=[ 116], 80.00th=[ 136], 90.00th=[ 155], 95.00th=[ 180], 00:24:44.756 | 99.00th=[ 249], 99.50th=[ 268], 99.90th=[ 296], 99.95th=[ 296], 00:24:44.756 | 99.99th=[ 300] 00:24:44.756 bw ( KiB/s): min=78336, max=245248, per=9.27%, avg=163392.05, stdev=52508.51, samples=20 00:24:44.756 iops : min= 306, max= 958, avg=638.25, stdev=205.11, samples=20 00:24:44.756 lat (msec) : 4=0.06%, 10=0.16%, 20=0.73%, 50=6.16%, 100=53.69% 00:24:44.756 lat (msec) : 250=38.29%, 500=0.92% 00:24:44.756 cpu : usr=0.36%, sys=1.86%, ctx=1374, majf=0, minf=4097 00:24:44.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:44.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.756 issued rwts: total=6445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.756 job10: (groupid=0, jobs=1): err= 0: pid=3828039: Wed Jul 24 09:10:20 2024 00:24:44.756 read: IOPS=607, BW=152MiB/s (159MB/s)(1535MiB/10104msec) 00:24:44.756 slat (usec): min=9, max=165592, avg=1283.89, stdev=5730.79 00:24:44.756 clat (usec): min=1921, max=306687, avg=103918.60, stdev=63621.37 00:24:44.756 lat (usec): min=1945, max=436405, avg=105202.49, stdev=64608.90 00:24:44.756 clat percentiles (msec): 00:24:44.756 | 1.00th=[ 5], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 40], 00:24:44.756 | 30.00th=[ 67], 40.00th=[ 85], 50.00th=[ 100], 60.00th=[ 110], 00:24:44.756 | 70.00th=[ 125], 80.00th=[ 148], 90.00th=[ 205], 95.00th=[ 232], 00:24:44.756 | 99.00th=[ 271], 99.50th=[ 284], 99.90th=[ 296], 99.95th=[ 300], 00:24:44.756 | 99.99th=[ 309] 00:24:44.756 bw ( KiB/s): min=71680, max=394240, per=8.83%, avg=155581.55, stdev=72212.77, samples=20 00:24:44.756 iops : min= 280, max= 1540, avg=607.70, stdev=282.11, samples=20 00:24:44.756 lat (msec) : 2=0.02%, 4=0.65%, 10=2.49%, 20=1.30%, 50=19.71% 00:24:44.756 lat (msec) : 100=27.12%, 250=45.55%, 500=3.16% 00:24:44.756 cpu : usr=0.32%, sys=1.95%, ctx=1323, majf=0, minf=4097 00:24:44.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:24:44.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:44.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:44.756 issued rwts: total=6140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:44.756 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:44.756 00:24:44.756 Run status group 0 (all jobs): 00:24:44.756 READ: bw=1720MiB/s (1804MB/s), 127MiB/s-228MiB/s (133MB/s-239MB/s), io=17.0GiB (18.2GB), run=10047-10113msec 00:24:44.756 00:24:44.756 Disk stats (read/write): 00:24:44.756 nvme0n1: ios=10196/0, merge=0/0, ticks=1233326/0, in_queue=1233326, util=97.15% 00:24:44.756 nvme10n1: ios=18211/0, merge=0/0, ticks=1234989/0, in_queue=1234989, util=97.38% 00:24:44.756 nvme1n1: ios=15075/0, merge=0/0, ticks=1238975/0, in_queue=1238975, util=97.66% 00:24:44.756 nvme2n1: ios=12293/0, merge=0/0, ticks=1240153/0, in_queue=1240153, util=97.81% 00:24:44.756 nvme3n1: ios=10833/0, merge=0/0, ticks=1235094/0, in_queue=1235094, util=97.89% 00:24:44.756 nvme4n1: ios=11787/0, merge=0/0, ticks=1242121/0, in_queue=1242121, util=98.24% 00:24:44.756 nvme5n1: ios=9969/0, merge=0/0, ticks=1236397/0, in_queue=1236397, util=98.41% 00:24:44.756 nvme6n1: ios=10192/0, merge=0/0, ticks=1237758/0, in_queue=1237758, util=98.52% 00:24:44.756 nvme7n1: ios=13517/0, merge=0/0, ticks=1229692/0, in_queue=1229692, util=98.91% 00:24:44.756 nvme8n1: ios=12654/0, merge=0/0, ticks=1242446/0, in_queue=1242446, util=99.10% 00:24:44.756 nvme9n1: ios=12080/0, merge=0/0, ticks=1233556/0, in_queue=1233556, util=99.22% 00:24:44.756 09:10:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:24:44.756 [global] 00:24:44.756 thread=1 00:24:44.756 invalidate=1 00:24:44.756 rw=randwrite 00:24:44.756 time_based=1 00:24:44.756 runtime=10 00:24:44.756 ioengine=libaio 00:24:44.756 direct=1 00:24:44.756 bs=262144 00:24:44.756 iodepth=64 00:24:44.756 
norandommap=1 00:24:44.756 numjobs=1 00:24:44.756 00:24:44.756 [job0] 00:24:44.756 filename=/dev/nvme0n1 00:24:44.756 [job1] 00:24:44.756 filename=/dev/nvme10n1 00:24:44.756 [job2] 00:24:44.756 filename=/dev/nvme1n1 00:24:44.756 [job3] 00:24:44.756 filename=/dev/nvme2n1 00:24:44.756 [job4] 00:24:44.756 filename=/dev/nvme3n1 00:24:44.756 [job5] 00:24:44.756 filename=/dev/nvme4n1 00:24:44.756 [job6] 00:24:44.756 filename=/dev/nvme5n1 00:24:44.756 [job7] 00:24:44.756 filename=/dev/nvme6n1 00:24:44.756 [job8] 00:24:44.756 filename=/dev/nvme7n1 00:24:44.756 [job9] 00:24:44.756 filename=/dev/nvme8n1 00:24:44.756 [job10] 00:24:44.756 filename=/dev/nvme9n1 00:24:44.756 Could not set queue depth (nvme0n1) 00:24:44.756 Could not set queue depth (nvme10n1) 00:24:44.756 Could not set queue depth (nvme1n1) 00:24:44.756 Could not set queue depth (nvme2n1) 00:24:44.756 Could not set queue depth (nvme3n1) 00:24:44.756 Could not set queue depth (nvme4n1) 00:24:44.756 Could not set queue depth (nvme5n1) 00:24:44.756 Could not set queue depth (nvme6n1) 00:24:44.756 Could not set queue depth (nvme7n1) 00:24:44.756 Could not set queue depth (nvme8n1) 00:24:44.756 Could not set queue depth (nvme9n1) 00:24:44.756 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.756 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.756 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:44.757 fio-3.35 00:24:44.757 Starting 11 threads 00:24:54.727 00:24:54.727 job0: (groupid=0, jobs=1): err= 0: pid=3829057: Wed Jul 24 09:10:31 2024 00:24:54.727 write: IOPS=420, BW=105MiB/s (110MB/s)(1060MiB/10082msec); 0 zone resets 00:24:54.727 slat (usec): min=21, max=43203, avg=1806.04, stdev=4542.30 00:24:54.727 clat (msec): min=2, max=371, avg=150.33, stdev=76.78 00:24:54.727 lat (msec): min=2, max=374, avg=152.14, stdev=77.89 00:24:54.727 clat percentiles (msec): 00:24:54.727 | 1.00th=[ 6], 5.00th=[ 20], 10.00th=[ 32], 20.00th=[ 77], 00:24:54.727 | 30.00th=[ 112], 40.00th=[ 130], 50.00th=[ 159], 60.00th=[ 182], 00:24:54.727 | 70.00th=[ 199], 80.00th=[ 220], 90.00th=[ 247], 95.00th=[ 266], 00:24:54.727 | 99.00th=[ 300], 99.50th=[ 305], 99.90th=[ 368], 99.95th=[ 372], 00:24:54.727 | 99.99th=[ 372] 00:24:54.727 bw ( KiB/s): min=59392, max=237568, per=7.93%, avg=106883.25, stdev=41577.86, samples=20 00:24:54.727 iops : min= 
232, max= 928, avg=417.45, stdev=162.39, samples=20 00:24:54.727 lat (msec) : 4=0.33%, 10=1.75%, 20=3.26%, 50=10.21%, 100=10.12% 00:24:54.727 lat (msec) : 250=66.29%, 500=8.04% 00:24:54.727 cpu : usr=1.55%, sys=1.41%, ctx=2301, majf=0, minf=1 00:24:54.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:54.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.727 issued rwts: total=0,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.727 job1: (groupid=0, jobs=1): err= 0: pid=3829058: Wed Jul 24 09:10:31 2024 00:24:54.727 write: IOPS=407, BW=102MiB/s (107MB/s)(1034MiB/10156msec); 0 zone resets 00:24:54.727 slat (usec): min=18, max=104414, avg=1817.39, stdev=5439.09 00:24:54.727 clat (usec): min=985, max=449549, avg=155233.85, stdev=87233.28 00:24:54.727 lat (usec): min=1028, max=449606, avg=157051.24, stdev=88211.51 00:24:54.727 clat percentiles (msec): 00:24:54.727 | 1.00th=[ 8], 5.00th=[ 38], 10.00th=[ 53], 20.00th=[ 82], 00:24:54.727 | 30.00th=[ 115], 40.00th=[ 126], 50.00th=[ 140], 60.00th=[ 159], 00:24:54.727 | 70.00th=[ 182], 80.00th=[ 228], 90.00th=[ 271], 95.00th=[ 321], 00:24:54.727 | 99.00th=[ 409], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 451], 00:24:54.727 | 99.99th=[ 451] 00:24:54.727 bw ( KiB/s): min=40960, max=232960, per=7.73%, avg=104205.10, stdev=43962.01, samples=20 00:24:54.727 iops : min= 160, max= 910, avg=407.00, stdev=171.78, samples=20 00:24:54.727 lat (usec) : 1000=0.05% 00:24:54.727 lat (msec) : 2=0.24%, 4=0.19%, 10=0.87%, 20=1.40%, 50=5.30% 00:24:54.727 lat (msec) : 100=16.69%, 250=61.01%, 500=14.25% 00:24:54.727 cpu : usr=1.43%, sys=1.22%, ctx=1965, majf=0, minf=1 00:24:54.727 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:54.727 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.727 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.727 issued rwts: total=0,4134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.727 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.727 job2: (groupid=0, jobs=1): err= 0: pid=3829059: Wed Jul 24 09:10:31 2024 00:24:54.727 write: IOPS=506, BW=127MiB/s (133MB/s)(1284MiB/10150msec); 0 zone resets 00:24:54.727 slat (usec): min=18, max=108455, avg=1406.86, stdev=4254.12 00:24:54.727 clat (usec): min=871, max=358222, avg=125012.92, stdev=86664.73 00:24:54.727 lat (usec): min=952, max=376994, avg=126419.78, stdev=87768.53 00:24:54.727 clat percentiles (msec): 00:24:54.727 | 1.00th=[ 3], 5.00th=[ 10], 10.00th=[ 22], 20.00th=[ 39], 00:24:54.728 | 30.00th=[ 47], 40.00th=[ 82], 50.00th=[ 123], 60.00th=[ 148], 00:24:54.728 | 70.00th=[ 184], 80.00th=[ 213], 90.00th=[ 245], 95.00th=[ 268], 00:24:54.728 | 99.00th=[ 326], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 359], 00:24:54.728 | 99.99th=[ 359] 00:24:54.728 bw ( KiB/s): min=59904, max=376320, per=9.63%, avg=129868.80, stdev=81013.47, samples=20 00:24:54.728 iops : min= 234, max= 1470, avg=507.30, stdev=316.46, samples=20 00:24:54.728 lat (usec) : 1000=0.06% 00:24:54.728 lat (msec) : 2=0.43%, 4=1.44%, 10=3.15%, 20=3.76%, 50=22.22% 00:24:54.728 lat (msec) : 100=13.08%, 250=47.22%, 500=8.64% 00:24:54.728 cpu : usr=1.81%, sys=1.73%, ctx=2972, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:54.728 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,5136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job3: (groupid=0, jobs=1): err= 0: pid=3829072: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=411, BW=103MiB/s (108MB/s)(1047MiB/10167msec); 0 zone resets 00:24:54.728 slat (usec): min=21, max=107530, avg=1651.66, stdev=5382.87 00:24:54.728 clat (usec): min=1084, max=427656, avg=153551.01, stdev=93576.11 00:24:54.728 lat (usec): min=1185, max=447371, avg=155202.67, stdev=94815.43 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 31], 20.00th=[ 52], 00:24:54.728 | 30.00th=[ 82], 40.00th=[ 122], 50.00th=[ 169], 60.00th=[ 190], 00:24:54.728 | 70.00th=[ 207], 80.00th=[ 230], 90.00th=[ 284], 95.00th=[ 317], 00:24:54.728 | 99.00th=[ 347], 99.50th=[ 355], 99.90th=[ 376], 99.95th=[ 426], 00:24:54.728 | 99.99th=[ 426] 00:24:54.728 bw ( KiB/s): min=57344, max=194048, per=7.83%, avg=105590.40, stdev=31396.91, samples=20 00:24:54.728 iops : min= 224, max= 758, avg=412.35, stdev=122.58, samples=20 00:24:54.728 lat (msec) : 2=0.24%, 4=0.79%, 10=2.63%, 20=3.77%, 50=11.99% 00:24:54.728 lat (msec) : 100=14.45%, 250=50.57%, 500=15.57% 00:24:54.728 cpu : usr=1.32%, sys=1.64%, ctx=2660, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,4188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job4: (groupid=0, jobs=1): err= 0: pid=3829073: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=488, BW=122MiB/s (128MB/s)(1246MiB/10193msec); 0 zone resets 00:24:54.728 slat (usec): min=20, max=48214, avg=1413.01, stdev=4071.49 00:24:54.728 clat (usec): min=1599, max=447804, avg=129392.03, stdev=78669.49 00:24:54.728 lat (usec): min=1975, max=451601, avg=130805.04, stdev=79681.85 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 7], 5.00th=[ 20], 10.00th=[ 33], 20.00th=[ 56], 00:24:54.728 | 30.00th=[ 84], 40.00th=[ 111], 50.00th=[ 130], 60.00th=[ 146], 00:24:54.728 | 70.00th=[ 157], 80.00th=[ 180], 90.00th=[ 215], 95.00th=[ 271], 00:24:54.728 | 99.00th=[ 405], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 443], 00:24:54.728 | 99.99th=[ 447] 00:24:54.728 bw ( KiB/s): min=41472, max=239648, per=9.34%, avg=125866.10, stdev=46296.16, samples=20 00:24:54.728 iops : min= 162, max= 936, avg=491.65, stdev=180.83, samples=20 00:24:54.728 lat (msec) : 2=0.04%, 4=0.46%, 10=1.59%, 20=3.29%, 50=13.11% 00:24:54.728 lat (msec) : 100=17.38%, 250=58.05%, 500=6.08% 00:24:54.728 cpu : usr=1.66%, sys=1.80%, ctx=2833, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,4982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job5: (groupid=0, jobs=1): err= 0: pid=3829074: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=417, BW=104MiB/s 
(109MB/s)(1060MiB/10161msec); 0 zone resets 00:24:54.728 slat (usec): min=22, max=174477, avg=1596.08, stdev=5470.96 00:24:54.728 clat (usec): min=1669, max=445486, avg=151511.54, stdev=89021.03 00:24:54.728 lat (msec): min=2, max=449, avg=153.11, stdev=90.17 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 7], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 75], 00:24:54.728 | 30.00th=[ 111], 40.00th=[ 125], 50.00th=[ 142], 60.00th=[ 167], 00:24:54.728 | 70.00th=[ 188], 80.00th=[ 205], 90.00th=[ 262], 95.00th=[ 334], 00:24:54.728 | 99.00th=[ 414], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 443], 00:24:54.728 | 99.99th=[ 447] 00:24:54.728 bw ( KiB/s): min=39424, max=185856, per=7.93%, avg=106866.05, stdev=39993.66, samples=20 00:24:54.728 iops : min= 154, max= 726, avg=417.40, stdev=156.19, samples=20 00:24:54.728 lat (msec) : 2=0.02%, 4=0.21%, 10=1.30%, 20=2.48%, 50=10.62% 00:24:54.728 lat (msec) : 100=12.10%, 250=62.00%, 500=11.28% 00:24:54.728 cpu : usr=1.37%, sys=1.56%, ctx=2539, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,4239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job6: (groupid=0, jobs=1): err= 0: pid=3829075: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=584, BW=146MiB/s (153MB/s)(1474MiB/10081msec); 0 zone resets 00:24:54.728 slat (usec): min=18, max=96034, avg=1121.80, stdev=3701.67 00:24:54.728 clat (usec): min=1139, max=335963, avg=108221.39, stdev=72541.73 00:24:54.728 lat (usec): min=1182, max=335999, avg=109343.19, stdev=73361.29 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 4], 5.00th=[ 10], 10.00th=[ 19], 20.00th=[ 37], 00:24:54.728 | 30.00th=[ 50], 40.00th=[ 80], 50.00th=[ 110], 60.00th=[ 129], 00:24:54.728 | 70.00th=[ 146], 80.00th=[ 165], 90.00th=[ 203], 95.00th=[ 245], 00:24:54.728 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 330], 99.95th=[ 334], 00:24:54.728 | 99.99th=[ 338] 00:24:54.728 bw ( KiB/s): min=51200, max=326656, per=11.08%, avg=149285.55, stdev=71484.39, samples=20 00:24:54.728 iops : min= 200, max= 1276, avg=583.10, stdev=279.29, samples=20 00:24:54.728 lat (msec) : 2=0.27%, 4=1.02%, 10=3.85%, 20=5.61%, 50=20.76% 00:24:54.728 lat (msec) : 100=14.37%, 250=49.82%, 500=4.29% 00:24:54.728 cpu : usr=1.87%, sys=1.97%, ctx=3701, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,5895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job7: (groupid=0, jobs=1): err= 0: pid=3829076: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=522, BW=131MiB/s (137MB/s)(1327MiB/10157msec); 0 zone resets 00:24:54.728 slat (usec): min=18, max=103629, avg=1158.46, stdev=4074.07 00:24:54.728 clat (usec): min=1929, max=356708, avg=121227.46, stdev=80015.71 00:24:54.728 lat (usec): min=1982, max=357318, avg=122385.92, stdev=80943.57 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 7], 5.00th=[ 17], 10.00th=[ 23], 20.00th=[ 42], 00:24:54.728 | 30.00th=[ 65], 40.00th=[ 83], 50.00th=[ 115], 
60.00th=[ 138], 00:24:54.728 | 70.00th=[ 167], 80.00th=[ 197], 90.00th=[ 232], 95.00th=[ 264], 00:24:54.728 | 99.00th=[ 330], 99.50th=[ 342], 99.90th=[ 351], 99.95th=[ 355], 00:24:54.728 | 99.99th=[ 359] 00:24:54.728 bw ( KiB/s): min=63488, max=256512, per=9.96%, avg=134247.05, stdev=54709.92, samples=20 00:24:54.728 iops : min= 248, max= 1002, avg=524.35, stdev=213.71, samples=20 00:24:54.728 lat (msec) : 2=0.02%, 4=0.13%, 10=2.34%, 20=4.67%, 50=17.18% 00:24:54.728 lat (msec) : 100=21.63%, 250=47.29%, 500=6.74% 00:24:54.728 cpu : usr=1.75%, sys=1.91%, ctx=3561, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,5308,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job8: (groupid=0, jobs=1): err= 0: pid=3829077: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=430, BW=108MiB/s (113MB/s)(1095MiB/10164msec); 0 zone resets 00:24:54.728 slat (usec): min=21, max=134040, avg=1775.95, stdev=5134.93 00:24:54.728 clat (usec): min=1579, max=438980, avg=146635.28, stdev=88608.10 00:24:54.728 lat (usec): min=1637, max=439067, avg=148411.23, stdev=89827.92 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 20], 20.00th=[ 51], 00:24:54.728 | 30.00th=[ 91], 40.00th=[ 126], 50.00th=[ 159], 60.00th=[ 178], 00:24:54.728 | 70.00th=[ 192], 80.00th=[ 213], 90.00th=[ 257], 95.00th=[ 296], 00:24:54.728 | 99.00th=[ 351], 99.50th=[ 414], 99.90th=[ 426], 99.95th=[ 430], 00:24:54.728 | 99.99th=[ 439] 00:24:54.728 bw ( KiB/s): min=39424, max=193536, per=8.20%, avg=110488.00, stdev=44179.68, samples=20 00:24:54.728 iops : min= 154, max= 756, avg=431.50, stdev=172.52, samples=20 00:24:54.728 lat (msec) : 2=0.11%, 4=2.35%, 10=3.70%, 20=4.22%, 50=9.61% 00:24:54.728 lat (msec) : 100=11.78%, 250=57.19%, 500=11.03% 00:24:54.728 cpu : usr=1.46%, sys=1.61%, ctx=2402, majf=0, minf=1 00:24:54.728 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:54.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.728 issued rwts: total=0,4380,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.728 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.728 job9: (groupid=0, jobs=1): err= 0: pid=3829078: Wed Jul 24 09:10:31 2024 00:24:54.728 write: IOPS=649, BW=162MiB/s (170MB/s)(1654MiB/10194msec); 0 zone resets 00:24:54.728 slat (usec): min=19, max=47233, avg=1131.32, stdev=3214.02 00:24:54.728 clat (usec): min=1551, max=451051, avg=97403.97, stdev=72132.56 00:24:54.728 lat (usec): min=1632, max=451124, avg=98535.30, stdev=72946.75 00:24:54.728 clat percentiles (msec): 00:24:54.728 | 1.00th=[ 5], 5.00th=[ 23], 10.00th=[ 40], 20.00th=[ 51], 00:24:54.728 | 30.00th=[ 55], 40.00th=[ 58], 50.00th=[ 72], 60.00th=[ 93], 00:24:54.729 | 70.00th=[ 114], 80.00th=[ 138], 90.00th=[ 194], 95.00th=[ 232], 00:24:54.729 | 99.00th=[ 393], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 447], 00:24:54.729 | 99.99th=[ 451] 00:24:54.729 bw ( KiB/s): min=40960, max=306688, per=12.44%, avg=167709.75, stdev=69015.10, samples=20 00:24:54.729 iops : min= 160, max= 1198, avg=655.10, stdev=269.57, samples=20 00:24:54.729 lat (msec) : 2=0.05%, 4=0.56%, 10=1.25%, 
20=2.46%, 50=14.22% 00:24:54.729 lat (msec) : 100=44.01%, 250=33.90%, 500=3.55% 00:24:54.729 cpu : usr=2.02%, sys=1.81%, ctx=3090, majf=0, minf=1 00:24:54.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:24:54.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.729 issued rwts: total=0,6617,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.729 job10: (groupid=0, jobs=1): err= 0: pid=3829079: Wed Jul 24 09:10:31 2024 00:24:54.729 write: IOPS=447, BW=112MiB/s (117MB/s)(1139MiB/10191msec); 0 zone resets 00:24:54.729 slat (usec): min=17, max=41978, avg=1538.56, stdev=4143.22 00:24:54.729 clat (usec): min=1809, max=447748, avg=141491.27, stdev=78696.75 00:24:54.729 lat (usec): min=1843, max=449518, avg=143029.84, stdev=79665.52 00:24:54.729 clat percentiles (msec): 00:24:54.729 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 41], 20.00th=[ 71], 00:24:54.729 | 30.00th=[ 103], 40.00th=[ 128], 50.00th=[ 148], 60.00th=[ 159], 00:24:54.729 | 70.00th=[ 174], 80.00th=[ 192], 90.00th=[ 234], 95.00th=[ 275], 00:24:54.729 | 99.00th=[ 405], 99.50th=[ 426], 99.90th=[ 443], 99.95th=[ 447], 00:24:54.729 | 99.99th=[ 447] 00:24:54.729 bw ( KiB/s): min=41984, max=225280, per=8.53%, avg=115018.65, stdev=42702.55, samples=20 00:24:54.729 iops : min= 164, max= 880, avg=449.25, stdev=166.77, samples=20 00:24:54.729 lat (msec) : 2=0.02%, 4=1.36%, 10=3.51%, 20=2.11%, 50=5.49% 00:24:54.729 lat (msec) : 100=16.72%, 250=62.32%, 500=8.47% 00:24:54.729 cpu : usr=1.59%, sys=1.65%, ctx=2623, majf=0, minf=1 00:24:54.729 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:24:54.729 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:54.729 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:24:54.729 issued rwts: total=0,4557,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:54.729 latency : target=0, window=0, percentile=100.00%, depth=64 00:24:54.729 00:24:54.729 Run status group 0 (all jobs): 00:24:54.729 WRITE: bw=1316MiB/s (1380MB/s), 102MiB/s-162MiB/s (107MB/s-170MB/s), io=13.1GiB (14.1GB), run=10081-10194msec 00:24:54.729 00:24:54.729 Disk stats (read/write): 00:24:54.729 nvme0n1: ios=49/8243, merge=0/0, ticks=365/1218380, in_queue=1218745, util=97.71% 00:24:54.729 nvme10n1: ios=46/8063, merge=0/0, ticks=2402/1213737, in_queue=1216139, util=99.64% 00:24:54.729 nvme1n1: ios=47/9856, merge=0/0, ticks=1456/1218402, in_queue=1219858, util=99.89% 00:24:54.729 nvme2n1: ios=45/8200, merge=0/0, ticks=1024/1212370, in_queue=1213394, util=99.99% 00:24:54.729 nvme3n1: ios=49/9939, merge=0/0, ticks=2698/1242812, in_queue=1245510, util=100.00% 00:24:54.729 nvme4n1: ios=47/8300, merge=0/0, ticks=991/1215424, in_queue=1216415, util=99.97% 00:24:54.729 nvme5n1: ios=46/11557, merge=0/0, ticks=625/1223882, in_queue=1224507, util=100.00% 00:24:54.729 nvme6n1: ios=0/10434, merge=0/0, ticks=0/1219182, in_queue=1219182, util=98.39% 00:24:54.729 nvme7n1: ios=44/8587, merge=0/0, ticks=562/1213070, in_queue=1213632, util=100.00% 00:24:54.729 nvme8n1: ios=0/13208, merge=0/0, ticks=0/1243401, in_queue=1243401, util=99.00% 00:24:54.729 nvme9n1: ios=21/9093, merge=0/0, ticks=66/1245356, in_queue=1245422, util=99.45% 00:24:54.729 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:24:54.729 09:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:24:54.729 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.729 09:10:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:54.729 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK1 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK1 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:24:54.729 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK2 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK2 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.729 09:10:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:24:54.729 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK3 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK3 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.729 09:10:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:24:54.989 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK4 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK4 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:54.989 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:54.989 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:24:55.279 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK5 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK5 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.279 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:24:55.536 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK6 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK6 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.536 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:24:55.536 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK7 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK7 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.536 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:24:55.794 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK8 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK8 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.794 09:10:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:24:55.794 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK9 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK9 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.794 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:24:56.053 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK10 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK10 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.053 
09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:56.053 09:10:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:24:56.053 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1217 -- # local i=0 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1218 -- # grep -q -w SPDK11 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1225 -- # grep -q -w SPDK11 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1229 -- # return 0 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.053 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:56.311 rmmod nvme_tcp 00:24:56.311 rmmod nvme_fabrics 00:24:56.311 rmmod nvme_keyring 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 3823777 ']' 00:24:56.311 09:10:34 
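[Editorial sketch] nvmfcleanup, condensed from the trace above: flush I/O, then unload the kernel NVMe/TCP initiator stack under a tolerant retry loop. The break-on-success is an assumption; the trace only shows the first, successful pass, where removing nvme-tcp also pulls out nvme_fabrics and nvme_keyring (the rmmod lines above):

    sync
    set +e                                  # module unload may fail transiently while devices settle
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # assumed: stop retrying once the unload succeeds
    done
    modprobe -v -r nvme-fabrics
    set -e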
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 3823777 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 3823777 ']' 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 3823777 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3823777 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3823777' 00:24:56.311 killing process with pid 3823777 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 3823777 00:24:56.311 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 3823777 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:56.878 09:10:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:58.781 00:24:58.781 real 1m0.618s 00:24:58.781 user 3m14.708s 00:24:58.781 sys 0m26.431s 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:58.781 ************************************ 00:24:58.781 END TEST nvmf_multiconnection 00:24:58.781 ************************************ 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
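[Editorial sketch] The killprocess helper used in that teardown, condensed from the PID-3823777 trace above (Linux branch only; the trace checks whether the process is a sudo wrapper but only the false branch is exercised here):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                        # fail fast if the PID is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
        fi
        # (the real helper special-cases process_name = sudo; not taken in this run)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }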
00:24:58.781 ************************************ 00:24:58.781 START TEST nvmf_initiator_timeout 00:24:58.781 ************************************ 00:24:58.781 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:24:59.040 * Looking for test storage... 00:24:59.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same toolchain dirs repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=[as above, go dir rotated to the front] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=[as above, protoc dir rotated to the front] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo [the PATH value above] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.040 09:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:24:59.040 09:10:36 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:00.943 09:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:00.943 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.943 09:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:00.943 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:00.943 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:00.944 Found net devices under 0000:09:00.0: cvl_0_0 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:00.944 09:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:00.944 Found net devices under 0000:09:00.1: cvl_0_1 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:00.944 09:10:38 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:00.944 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:00.944 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:00.944 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
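[Editorial sketch] With both E810 ports (cvl_0_0, cvl_0_1) discovered, nvmf_tcp_init splits them across a network namespace so the target (10.0.0.2) and the initiator (10.0.0.1) can run on one host over real hardware. The equivalent commands, condensed from the trace above:

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic on port 4420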
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:00.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:00.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:25:00.944 00:25:00.944 --- 10.0.0.2 ping statistics --- 00:25:00.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:00.944 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:25:00.944 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:01.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:01.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:25:01.203 00:25:01.203 --- 10.0.0.1 ping statistics --- 00:25:01.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:01.203 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=3832402 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 3832402 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 3832402 ']' 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
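[Editorial sketch] Both directions ping, so nvmfappstart launches the target inside the namespace and waits for its RPC socket. Condensed from the trace (workspace path shortened; backgrounding with & and $! is the assumed shape of the helper):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -i 0: shm id; -e 0xFFFF: all tracepoint groups; -m 0xF: 4 reactor cores
    nvmfpid=$!                                        # 3832402 in this run
    waitforlisten $nvmfpid                            # blocks until the app answers on /var/tmp/spdk.sock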
00:25:01.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.203 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.203 [2024-07-24 09:10:39.129227] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:25:01.203 [2024-07-24 09:10:39.129304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.203 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.203 [2024-07-24 09:10:39.165835] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:25:01.203 [2024-07-24 09:10:39.192934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:01.203 [2024-07-24 09:10:39.279961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.203 [2024-07-24 09:10:39.280013] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.203 [2024-07-24 09:10:39.280040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.203 [2024-07-24 09:10:39.280051] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.203 [2024-07-24 09:10:39.280061] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.203 [2024-07-24 09:10:39.280195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.203 [2024-07-24 09:10:39.280259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.203 [2024-07-24 09:10:39.280289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.203 [2024-07-24 09:10:39.280291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout 
-- common/autotest_common.sh@10 -- # set +x 00:25:01.462 Malloc0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 Delay0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 [2024-07-24 09:10:39.454279] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:01.462 [2024-07-24 09:10:39.482615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.462 09:10:39 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:02.029 09:10:40 
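[Editorial sketch] The entire target-side configuration for this test is six RPCs plus one initiator connect, condensed from the trace above. Delay0 wraps Malloc0 with a 30 us artificial latency on every I/O class (delay bdev latencies are given in microseconds):

    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev, 512 B blocks
    rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
                                    # -r/-t: avg/p99 read latency; -w/-n: avg/p99 write latency (us)
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                    # -u: 8 KiB io unit size
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME   # -a: allow any host
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0    # expose the delay bdev as a namespace
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420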
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:02.029 09:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1196 -- # local i=0 00:25:02.029 09:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:25:02.029 09:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:25:02.029 09:10:40 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # sleep 2 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # return 0 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=3832795 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:04.554 09:10:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:04.554 [global] 00:25:04.554 thread=1 00:25:04.554 invalidate=1 00:25:04.554 rw=write 00:25:04.554 time_based=1 00:25:04.554 runtime=60 00:25:04.554 ioengine=libaio 00:25:04.554 direct=1 00:25:04.554 bs=4096 00:25:04.554 iodepth=1 00:25:04.554 norandommap=0 00:25:04.554 numjobs=1 00:25:04.554 00:25:04.554 verify_dump=1 00:25:04.554 verify_backlog=512 00:25:04.554 verify_state_save=0 00:25:04.554 do_verify=1 00:25:04.554 verify=crc32c-intel 00:25:04.554 [job0] 00:25:04.554 filename=/dev/nvme0n1 00:25:04.554 Could not set queue depth (nvme0n1) 00:25:04.554 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:04.554 fio-3.35 00:25:04.554 Starting 1 thread 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.080 true 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:25:07.080 true 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.080 true 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:07.080 true 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:07.080 09:10:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:10.357 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:10.357 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.357 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.357 true 00:25:10.357 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.358 true 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.358 true 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:10.358 true 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
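[Editorial sketch] This is the core of the initiator-timeout check: while the 60-second fio write job runs against /dev/nvme0n1, the Delay0 latencies are raised from 30 us to 31,000,000 us (31 s; p99_write gets 310000000, one extra zero, exactly as it appears in the script trace), held for 3 seconds, then restored, and fio must still finish cleanly. Condensed (the restore is issued as four separate RPCs in the trace, written as a loop here):

    rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000   # stall reads ~31 s
    rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
    rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000
    sleep 3
    for lat in avg_read avg_write p99_read p99_write; do
        rpc_cmd bdev_delay_update_latency Delay0 $lat 30          # back to 30 us
    done
    fio_status=0
    wait $fio_pid || fio_status=$?                                # fio (pid 3832795 here) must exit 0

The stall is visible in the job summary that follows as the ~41 s clat max on the read side, and the run ends with "nvmf hotplug test: fio successful as expected".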
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:10.358 09:10:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 3832795 00:26:06.625 00:26:06.625 job0: (groupid=0, jobs=1): err= 0: pid=3832900: Wed Jul 24 09:11:42 2024 00:26:06.625 read: IOPS=72, BW=292KiB/s (299kB/s)(17.1MiB/60008msec) 00:26:06.625 slat (usec): min=4, max=11437, avg=19.01, stdev=215.80 00:26:06.625 clat (usec): min=261, max=41114k, avg=13394.14, stdev=621344.54 00:26:06.625 lat (usec): min=266, max=41114k, avg=13413.14, stdev=621344.84 00:26:06.625 clat percentiles (usec): 00:26:06.625 | 1.00th=[ 273], 5.00th=[ 285], 10.00th=[ 293], 00:26:06.625 | 20.00th=[ 310], 30.00th=[ 330], 40.00th=[ 355], 00:26:06.625 | 50.00th=[ 375], 60.00th=[ 379], 70.00th=[ 392], 00:26:06.625 | 80.00th=[ 412], 90.00th=[ 545], 95.00th=[ 42206], 00:26:06.625 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[ 42206], 00:26:06.625 | 99.95th=[ 42206], 99.99th=[17112761] 00:26:06.625 write: IOPS=76, BW=307KiB/s (315kB/s)(18.0MiB/60008msec); 0 zone resets 00:26:06.625 slat (nsec): min=5499, max=80833, avg=11931.01, stdev=9288.27 00:26:06.625 clat (usec): min=183, max=1235, avg=256.45, stdev=63.58 00:26:06.625 lat (usec): min=190, max=1243, avg=268.38, stdev=69.98 00:26:06.625 clat percentiles (usec): 00:26:06.625 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 208], 00:26:06.625 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 243], 00:26:06.625 | 70.00th=[ 262], 80.00th=[ 314], 90.00th=[ 363], 95.00th=[ 392], 00:26:06.625 | 99.00th=[ 424], 99.50th=[ 433], 99.90th=[ 461], 99.95th=[ 490], 00:26:06.625 | 99.99th=[ 1237] 00:26:06.625 bw ( KiB/s): min= 472, max= 6648, per=100.00%, avg=4096.00, stdev=1667.81, samples=9 00:26:06.625 iops : min= 118, max= 1662, avg=1024.00, stdev=416.95, samples=9 00:26:06.625 lat (usec) : 250=34.23%, 500=59.68%, 750=1.76%, 1000=0.02% 00:26:06.625 lat (msec) : 2=0.01%, 50=4.30%, >=2000=0.01% 00:26:06.625 cpu : usr=0.11%, sys=0.26%, ctx=8989, majf=0, minf=2 00:26:06.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.625 issued rwts: total=4379,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:06.625 00:26:06.625 Run status group 0 (all jobs): 00:26:06.626 READ: bw=292KiB/s (299kB/s), 292KiB/s-292KiB/s (299kB/s-299kB/s), io=17.1MiB (17.9MB), run=60008-60008msec 00:26:06.626 WRITE: bw=307KiB/s (315kB/s), 307KiB/s-307KiB/s (315kB/s-315kB/s), io=18.0MiB (18.9MB), run=60008-60008msec 00:26:06.626 00:26:06.626 Disk stats (read/write): 00:26:06.626 nvme0n1: ios=4475/4608, merge=0/0, ticks=18546/1129, in_queue=19675, util=99.63% 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:06.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1217 -- # local i=0 00:26:06.626 09:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1229 -- # return 0 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:06.626 nvmf hotplug test: fio successful as expected 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:06.626 rmmod nvme_tcp 00:26:06.626 rmmod nvme_fabrics 00:26:06.626 rmmod nvme_keyring 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 3832402 ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 3832402 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 3832402 ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 3832402 
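Note: the block above is the core of the initiator-timeout test: waitforserial polls lsblk until the namespace with serial SPDKISFASTANDAWESOME appears, fio runs a 60 s verified write job against /dev/nvme0n1, and the delay bdev's latencies are first raised to roughly 31 s (well past the initiator timeout) and later dropped back to 30 us so the stalled I/O can drain and fio finishes cleanly. A minimal bash sketch of that pattern, using the rpc.py path and bdev name from this run (the target/subsystem setup itself is assumed to already exist):

#!/usr/bin/env bash
# Sketch of the initiator_timeout flow traced above; assumes a running SPDK
# target exposing a delay bdev "Delay0" (serial SPDKISFASTANDAWESOME) over
# NVMe/TCP, with fio already writing to the attached /dev/nvme0n1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# waitforserial: poll until the namespace shows up as a block device.
i=0
while (( i++ <= 15 )); do
  (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
  sleep 2
done

# Raise the delay bdev latencies (microseconds) past the initiator timeout,
# let fio stall against them, then restore 30 us so queued writes drain.
for lat in avg_read avg_write p99_read p99_write; do
  "$rpc" bdev_delay_update_latency Delay0 "$lat" 31000000
done
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
  "$rpc" bdev_delay_update_latency Delay0 "$lat" 30
done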
00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3832402 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3832402' 00:26:06.626 killing process with pid 3832402 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 3832402 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 3832402 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:06.626 09:11:42 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.885 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:06.885 00:26:06.885 real 1m8.105s 00:26:06.885 user 4m10.772s 00:26:06.885 sys 0m6.273s 00:26:06.885 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:06.885 09:11:44 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:06.885 ************************************ 00:26:06.885 END TEST nvmf_initiator_timeout 00:26:06.885 ************************************ 00:26:07.144 09:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:26:07.144 09:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:26:07.144 09:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:26:07.145 09:11:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.145 09:11:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:09.048 
09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:09.048 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.048 09:11:46 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:09.048 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:09.048 Found net devices under 0000:09:00.0: cvl_0_0 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:09.048 Found net devices under 0000:09:00.1: cvl_0_1 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:09.048 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:09.049 ************************************ 00:26:09.049 START TEST nvmf_perf_adq 00:26:09.049 ************************************ 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:09.049 * Looking for test storage... 00:26:09.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.049 09:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.049 09:11:46 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.049 09:11:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:10.952 09:11:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:10.952 
09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:10.952 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:10.952 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:10.952 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 
-- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:10.953 Found net devices under 0000:09:00.0: cvl_0_0 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:10.953 Found net devices under 0000:09:00.1: cvl_0_1 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:10.953 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:11.891 09:11:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:13.802 09:11:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:19.134 09:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.134 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:19.134 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:19.135 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:19.135 Found net devices under 0000:09:00.0: cvl_0_0 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:19.135 Found net devices under 0000:09:00.1: cvl_0_1 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:19.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:19.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:26:19.135 00:26:19.135 --- 10.0.0.2 ping statistics --- 00:26:19.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.135 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:19.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:26:19.135 00:26:19.135 --- 10.0.0.1 ping statistics --- 00:26:19.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.135 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.135 09:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3844422 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3844422 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3844422 ']' 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:19.135 09:11:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.135 [2024-07-24 09:11:56.856474] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:26:19.135 [2024-07-24 09:11:56.856549] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.135 EAL: No free 2048 kB hugepages reported on node 1 00:26:19.135 [2024-07-24 09:11:56.894203] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:19.135 [2024-07-24 09:11:56.919951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.135 [2024-07-24 09:11:57.011056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.135 [2024-07-24 09:11:57.011130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.135 [2024-07-24 09:11:57.011146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.135 [2024-07-24 09:11:57.011157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.135 [2024-07-24 09:11:57.011167] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
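Note: nvmftestinit above turns the two detected E810 ports (device 0x159b, driver ice, net devices cvl_0_0 and cvl_0_1) into a point-to-point rig: cvl_0_0 moves into a private network namespace as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and both directions are ping-checked before nvmf_tgt is launched inside the namespace. The same topology can be rebuilt by hand with the commands the trace runs (interface names and addresses copied from this run):

# Rebuild the nvmf test topology from nvmf/common.sh by hand.
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in

# Sanity checks, as in the trace:
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator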
00:26:19.135 [2024-07-24 09:11:57.011220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.135 [2024-07-24 09:11:57.011281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.135 [2024-07-24 09:11:57.011346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:19.135 [2024-07-24 09:11:57.011349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.136 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 [2024-07-24 09:11:57.260645] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 Malloc1 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:19.396 [2024-07-24 09:11:57.314420] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3844449 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:19.396 09:11:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:19.396 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:21.311 "tick_rate": 2700000000, 00:26:21.311 "poll_groups": [ 00:26:21.311 { 00:26:21.311 "name": "nvmf_tgt_poll_group_000", 00:26:21.311 "admin_qpairs": 1, 00:26:21.311 "io_qpairs": 1, 00:26:21.311 "current_admin_qpairs": 1, 00:26:21.311 
"current_io_qpairs": 1, 00:26:21.311 "pending_bdev_io": 0, 00:26:21.311 "completed_nvme_io": 19909, 00:26:21.311 "transports": [ 00:26:21.311 { 00:26:21.311 "trtype": "TCP" 00:26:21.311 } 00:26:21.311 ] 00:26:21.311 }, 00:26:21.311 { 00:26:21.311 "name": "nvmf_tgt_poll_group_001", 00:26:21.311 "admin_qpairs": 0, 00:26:21.311 "io_qpairs": 1, 00:26:21.311 "current_admin_qpairs": 0, 00:26:21.311 "current_io_qpairs": 1, 00:26:21.311 "pending_bdev_io": 0, 00:26:21.311 "completed_nvme_io": 17646, 00:26:21.311 "transports": [ 00:26:21.311 { 00:26:21.311 "trtype": "TCP" 00:26:21.311 } 00:26:21.311 ] 00:26:21.311 }, 00:26:21.311 { 00:26:21.311 "name": "nvmf_tgt_poll_group_002", 00:26:21.311 "admin_qpairs": 0, 00:26:21.311 "io_qpairs": 1, 00:26:21.311 "current_admin_qpairs": 0, 00:26:21.311 "current_io_qpairs": 1, 00:26:21.311 "pending_bdev_io": 0, 00:26:21.311 "completed_nvme_io": 21121, 00:26:21.311 "transports": [ 00:26:21.311 { 00:26:21.311 "trtype": "TCP" 00:26:21.311 } 00:26:21.311 ] 00:26:21.311 }, 00:26:21.311 { 00:26:21.311 "name": "nvmf_tgt_poll_group_003", 00:26:21.311 "admin_qpairs": 0, 00:26:21.311 "io_qpairs": 1, 00:26:21.311 "current_admin_qpairs": 0, 00:26:21.311 "current_io_qpairs": 1, 00:26:21.311 "pending_bdev_io": 0, 00:26:21.311 "completed_nvme_io": 20241, 00:26:21.311 "transports": [ 00:26:21.311 { 00:26:21.311 "trtype": "TCP" 00:26:21.311 } 00:26:21.311 ] 00:26:21.311 } 00:26:21.311 ] 00:26:21.311 }' 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:21.311 09:11:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3844449 00:26:29.426 Initializing NVMe Controllers 00:26:29.426 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:29.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:29.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:29.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:29.426 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:29.426 Initialization complete. Launching workers. 
00:26:29.426 ========================================================
00:26:29.426 Latency(us)
00:26:29.426 Device Information : IOPS MiB/s Average min max
00:26:29.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10914.90 42.64 5864.22 2475.12 9120.14
00:26:29.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 9491.90 37.08 6744.20 2562.50 11323.76
00:26:29.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11367.00 44.40 5630.00 2929.61 7297.60
00:26:29.426 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10792.50 42.16 5932.01 1930.38 9716.38
00:26:29.426 ========================================================
00:26:29.426 Total : 42566.29 166.27 6015.09 1930.38 11323.76
00:26:29.426
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:29.426 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:29.426 rmmod nvme_tcp
00:26:29.426 rmmod nvme_fabrics
00:26:29.426 rmmod nvme_keyring
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3844422 ']'
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3844422
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3844422 ']'
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3844422
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3844422
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3844422'
00:26:29.684 killing process with pid 3844422
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3844422
00:26:29.684 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3844422
00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:29.941
09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:29.941 09:12:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:31.842 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:31.842 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:31.842 09:12:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:32.778 09:12:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:34.707 09:12:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:39.974 09:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:39.974 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound 
]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:39.974 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.974 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:39.975 Found net devices under 0000:09:00.0: cvl_0_0 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.975 09:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:39.975 Found net devices under 0000:09:00.1: cvl_0_1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:39.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:39.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:26:39.975 00:26:39.975 --- 10.0.0.2 ping statistics --- 00:26:39.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.975 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:39.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:39.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:39.975 00:26:39.975 --- 10.0.0.1 ping statistics --- 00:26:39.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:39.975 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:26:39.975 net.core.busy_poll = 1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:26:39.975 net.core.busy_read = 1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:26:39.975 09:12:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3847687 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3847687 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3847687 ']' 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:39.975 09:12:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.975 [2024-07-24 09:12:17.963603] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:26:39.975 [2024-07-24 09:12:17.963709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.975 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.975 [2024-07-24 09:12:18.001636] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:39.975 [2024-07-24 09:12:18.033997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.234 [2024-07-24 09:12:18.127122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.234 [2024-07-24 09:12:18.127183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.234 [2024-07-24 09:12:18.127200] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.234 [2024-07-24 09:12:18.127216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.234 [2024-07-24 09:12:18.127228] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
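Note on the adq_configure_driver step traced just above: stripped of the test-harness wrappers, it reduces to a short host-side sequence. The following is a condensed sketch assuming this run's values (E810 port cvl_0_0, a 2+2 queue split, listener 10.0.0.2:4420); the harness actually executes each command inside the cvl_0_0_ns_spdk network namespace, and interface names and queue counts will differ on other machines:

# enable hardware TC offload on the ice port and turn off packet-inspect optimization
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# busy-poll sockets instead of waiting on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# split the device into two traffic classes: queues 0-1 serve TC0, queues 2-3 serve TC1
/usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
/usr/sbin/tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1, offloaded in hardware (skip_sw)
/usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The harness then runs scripts/perf/nvmf/set_xps_rxqs cvl_0_0 to align XPS transmit-queue selection with the receive queues before starting the target below.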
00:26:40.234 [2024-07-24 09:12:18.127284] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.234 [2024-07-24 09:12:18.127335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.234 [2024-07-24 09:12:18.127454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.234 [2024-07-24 09:12:18.127456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.234 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.234 [2024-07-24 09:12:18.346522] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.491 Malloc1 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.491 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.492 [2024-07-24 09:12:18.400385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3847831 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:40.492 09:12:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:26:40.492 EAL: No free 2048 kB hugepages reported on node 1 00:26:42.390 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:26:42.390 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:26:42.391 "tick_rate": 2700000000, 00:26:42.391 "poll_groups": [ 00:26:42.391 { 00:26:42.391 "name": "nvmf_tgt_poll_group_000", 00:26:42.391 "admin_qpairs": 1, 00:26:42.391 "io_qpairs": 3, 00:26:42.391 "current_admin_qpairs": 1, 00:26:42.391 
"current_io_qpairs": 3, 00:26:42.391 "pending_bdev_io": 0, 00:26:42.391 "completed_nvme_io": 28023, 00:26:42.391 "transports": [ 00:26:42.391 { 00:26:42.391 "trtype": "TCP" 00:26:42.391 } 00:26:42.391 ] 00:26:42.391 }, 00:26:42.391 { 00:26:42.391 "name": "nvmf_tgt_poll_group_001", 00:26:42.391 "admin_qpairs": 0, 00:26:42.391 "io_qpairs": 1, 00:26:42.391 "current_admin_qpairs": 0, 00:26:42.391 "current_io_qpairs": 1, 00:26:42.391 "pending_bdev_io": 0, 00:26:42.391 "completed_nvme_io": 21436, 00:26:42.391 "transports": [ 00:26:42.391 { 00:26:42.391 "trtype": "TCP" 00:26:42.391 } 00:26:42.391 ] 00:26:42.391 }, 00:26:42.391 { 00:26:42.391 "name": "nvmf_tgt_poll_group_002", 00:26:42.391 "admin_qpairs": 0, 00:26:42.391 "io_qpairs": 0, 00:26:42.391 "current_admin_qpairs": 0, 00:26:42.391 "current_io_qpairs": 0, 00:26:42.391 "pending_bdev_io": 0, 00:26:42.391 "completed_nvme_io": 0, 00:26:42.391 "transports": [ 00:26:42.391 { 00:26:42.391 "trtype": "TCP" 00:26:42.391 } 00:26:42.391 ] 00:26:42.391 }, 00:26:42.391 { 00:26:42.391 "name": "nvmf_tgt_poll_group_003", 00:26:42.391 "admin_qpairs": 0, 00:26:42.391 "io_qpairs": 0, 00:26:42.391 "current_admin_qpairs": 0, 00:26:42.391 "current_io_qpairs": 0, 00:26:42.391 "pending_bdev_io": 0, 00:26:42.391 "completed_nvme_io": 0, 00:26:42.391 "transports": [ 00:26:42.391 { 00:26:42.391 "trtype": "TCP" 00:26:42.391 } 00:26:42.391 ] 00:26:42.391 } 00:26:42.391 ] 00:26:42.391 }' 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:26:42.391 09:12:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3847831 00:26:50.501 Initializing NVMe Controllers 00:26:50.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:50.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:50.501 Initialization complete. Launching workers. 
00:26:50.501 ========================================================
00:26:50.501 Latency(us)
00:26:50.501 Device Information : IOPS MiB/s Average min max
00:26:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5185.70 20.26 12344.56 1941.07 60215.60
00:26:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5592.70 21.85 11447.64 1670.10 57894.34
00:26:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4237.80 16.55 15106.85 2528.51 59251.17
00:26:50.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11602.20 45.32 5518.02 2018.02 9226.59
00:26:50.501 ========================================================
00:26:50.501 Total : 26618.40 103.98 9620.39 1670.10 60215.60
00:26:50.501
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:50.501 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:50.501 rmmod nvme_tcp
00:26:50.501 rmmod nvme_fabrics
00:26:50.759 rmmod nvme_keyring
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3847687 ']'
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3847687
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3847687 ']'
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3847687
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3847687
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3847687'
00:26:50.759 killing process with pid 3847687
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3847687
00:26:50.759 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3847687
00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:51.019
09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:51.019 09:12:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:26:52.922 00:26:52.922 real 0m44.035s 00:26:52.922 user 2m33.237s 00:26:52.922 sys 0m12.243s 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:52.922 ************************************ 00:26:52.922 END TEST nvmf_perf_adq 00:26:52.922 ************************************ 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:52.922 09:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:52.922 ************************************ 00:26:52.922 START TEST nvmf_shutdown 00:26:52.922 ************************************ 00:26:52.922 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:26:53.182 * Looking for test storage... 
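Before the shutdown suite proceeds, note what distinguished the two perf_adq passes above on the target side. Each pass drives the same RPC sequence (the log's rpc_cmd helper issues these RPCs via SPDK's scripts/rpc.py); the only knobs that change are the socket placement-id and the socket priority. A minimal sketch of the second pass's configuration, reusing this run's values (Malloc1, nqn.2016-06.io.spdk:cnode1, 10.0.0.2:4420) and the transport flags exactly as they appear in the trace:

# pass 1 used --enable-placement-id 0 and --sock-priority 0;
# pass 2 (below) enables placement-id 1 and tags connections with priority 1,
# so traffic lands in the ADQ traffic class set up by the flower filter
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The effect shows up in the nvmf_get_stats output above: with placement-id 0 every poll group carried exactly one I/O qpair, while with placement-id 1 the qpairs concentrated on two poll groups (3 + 1) and two groups stayed idle, which the jq select(.current_io_qpairs == 0) | wc -l check verifies.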
00:26:53.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.182 09:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:26:53.182 09:12:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:53.182 ************************************ 00:26:53.182 START TEST nvmf_shutdown_tc1 00:26:53.182 ************************************ 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.182 09:12:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.084 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:55.084 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:55.084 09:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:55.085 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:55.085 Found net devices under 0000:09:00.0: cvl_0_0 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:55.085 Found net devices under 0000:09:00.1: cvl_0_1 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.085 09:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:26:55.085 00:26:55.085 --- 10.0.0.2 ping statistics --- 00:26:55.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.085 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:55.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:55.085 00:26:55.085 --- 10.0.0.1 ping statistics --- 00:26:55.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.085 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3850984 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3850984 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3850984 ']' 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.085 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.344 [2024-07-24 09:12:33.231060] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:26:55.344 [2024-07-24 09:12:33.231146] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.344 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.344 [2024-07-24 09:12:33.270390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:55.344 [2024-07-24 09:12:33.298368] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.344 [2024-07-24 09:12:33.385585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.344 [2024-07-24 09:12:33.385638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.344 [2024-07-24 09:12:33.385665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.344 [2024-07-24 09:12:33.385677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.344 [2024-07-24 09:12:33.385686] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
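To summarize the setup the trace has just completed: the two E810 ports (0x8086:0x159b, ice driver) were mapped to kernel netdevs cvl_0_0 and cvl_0_1, cvl_0_0 was moved into a fresh network namespace to play the target, and cvl_0_1 stayed on the host as the initiator. Condensed from the ip/iptables commands above (the commands are verbatim from the trace, only the xtrace prefixes are stripped):

    ip netns add cvl_0_0_ns_spdk                                       # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # move target port inside
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP, host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP, namespace side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                 # host -> namespace: 0.183 ms
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host: 0.117 ms

Both pings succeed, so nvmf_tcp_init returns 0 and nvmfappstart launches nvmf_tgt inside the namespace with tracing fully enabled (-e 0xFFFF) and core mask 0x1E, i.e. four reactors on cores 1-4, as the reactor_run notices just below confirm; waitforlisten then blocks until pid 3850984 answers on /var/tmp/spdk.sock.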
00:26:55.344 [2024-07-24 09:12:33.385772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.344 [2024-07-24 09:12:33.385837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.344 [2024-07-24 09:12:33.389121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:26:55.344 [2024-07-24 09:12:33.389132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.602 [2024-07-24 09:12:33.542323] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.602 09:12:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:55.602 Malloc1 00:26:55.602 [2024-07-24 09:12:33.631733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.602 Malloc2 00:26:55.602 Malloc3 00:26:55.860 Malloc4 00:26:55.860 Malloc5 00:26:55.860 Malloc6 00:26:55.860 Malloc7 00:26:55.860 Malloc8 00:26:56.119 Malloc9 00:26:56.119 Malloc10 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3851046 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3851046 /var/tmp/bdevperf.sock 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3851046 ']' 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:56.119 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:56.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": 
"Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.119 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.119 "hdgst": ${hdgst:-false}, 00:26:56.119 "ddgst": ${ddgst:-false} 00:26:56.119 }, 00:26:56.119 "method": "bdev_nvme_attach_controller" 00:26:56.119 } 00:26:56.119 EOF 00:26:56.119 )") 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:26:56.119 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.119 { 00:26:56.119 "params": { 00:26:56.119 "name": "Nvme$subsystem", 00:26:56.119 "trtype": "$TEST_TRANSPORT", 00:26:56.119 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.119 "adrfam": "ipv4", 00:26:56.119 "trsvcid": "$NVMF_PORT", 00:26:56.119 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.120 "hdgst": ${hdgst:-false}, 00:26:56.120 "ddgst": ${ddgst:-false} 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 } 00:26:56.120 EOF 00:26:56.120 )") 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.120 { 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme$subsystem", 00:26:56.120 "trtype": "$TEST_TRANSPORT", 00:26:56.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "$NVMF_PORT", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.120 "hdgst": ${hdgst:-false}, 00:26:56.120 "ddgst": ${ddgst:-false} 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 } 00:26:56.120 EOF 00:26:56.120 )") 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.120 { 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme$subsystem", 00:26:56.120 "trtype": "$TEST_TRANSPORT", 00:26:56.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "$NVMF_PORT", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.120 "hdgst": ${hdgst:-false}, 00:26:56.120 "ddgst": ${ddgst:-false} 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 } 00:26:56.120 EOF 00:26:56.120 )") 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:56.120 { 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme$subsystem", 00:26:56.120 "trtype": "$TEST_TRANSPORT", 00:26:56.120 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "$NVMF_PORT", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:56.120 "hdgst": ${hdgst:-false}, 00:26:56.120 "ddgst": ${ddgst:-false} 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 } 00:26:56.120 EOF 00:26:56.120 )") 00:26:56.120 09:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:56.120 09:12:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme1", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme2", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme3", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme4", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme5", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme6", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme7", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme8", 00:26:56.120 "trtype": "tcp", 
00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme9", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 },{ 00:26:56.120 "params": { 00:26:56.120 "name": "Nvme10", 00:26:56.120 "trtype": "tcp", 00:26:56.120 "traddr": "10.0.0.2", 00:26:56.120 "adrfam": "ipv4", 00:26:56.120 "trsvcid": "4420", 00:26:56.120 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:56.120 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:56.120 "hdgst": false, 00:26:56.120 "ddgst": false 00:26:56.120 }, 00:26:56.120 "method": "bdev_nvme_attach_controller" 00:26:56.120 }' 00:26:56.120 [2024-07-24 09:12:34.152943] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:26:56.120 [2024-07-24 09:12:34.153037] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:26:56.120 EAL: No free 2048 kB hugepages reported on node 1 00:26:56.120 [2024-07-24 09:12:34.190620] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:26:56.120 [2024-07-24 09:12:34.220184] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.378 [2024-07-24 09:12:34.306998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3851046 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:26:58.276 09:12:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:26:59.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3851046 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3850984 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.209 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.209 { 00:26:59.209 "params": { 00:26:59.209 "name": "Nvme$subsystem", 00:26:59.209 "trtype": "$TEST_TRANSPORT", 00:26:59.209 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.209 "adrfam": "ipv4", 00:26:59.209 "trsvcid": "$NVMF_PORT", 00:26:59.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.209 "hdgst": ${hdgst:-false}, 00:26:59.209 "ddgst": ${ddgst:-false} 00:26:59.209 }, 00:26:59.209 "method": "bdev_nvme_attach_controller" 00:26:59.209 } 00:26:59.209 EOF 00:26:59.209 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 
00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:59.210 { 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme$subsystem", 00:26:59.210 "trtype": "$TEST_TRANSPORT", 00:26:59.210 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "$NVMF_PORT", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:59.210 "hdgst": ${hdgst:-false}, 00:26:59.210 "ddgst": ${ddgst:-false} 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 } 00:26:59.210 EOF 00:26:59.210 )") 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:26:59.210 09:12:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme1", 00:26:59.210 "trtype": "tcp", 00:26:59.210 "traddr": "10.0.0.2", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "4420", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:59.210 "hdgst": false, 00:26:59.210 "ddgst": false 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 },{ 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme2", 00:26:59.210 "trtype": "tcp", 00:26:59.210 "traddr": "10.0.0.2", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "4420", 00:26:59.210 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:59.210 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:59.210 "hdgst": false, 00:26:59.210 "ddgst": false 00:26:59.210 }, 00:26:59.210 "method": "bdev_nvme_attach_controller" 00:26:59.210 },{ 00:26:59.210 "params": { 00:26:59.210 "name": "Nvme3", 00:26:59.210 "trtype": "tcp", 00:26:59.210 "traddr": "10.0.0.2", 00:26:59.210 "adrfam": "ipv4", 00:26:59.210 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme4", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme5", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:26:59.211 "hdgst": false, 
00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme6", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme7", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme8", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme9", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 },{ 00:26:59.211 "params": { 00:26:59.211 "name": "Nvme10", 00:26:59.211 "trtype": "tcp", 00:26:59.211 "traddr": "10.0.0.2", 00:26:59.211 "adrfam": "ipv4", 00:26:59.211 "trsvcid": "4420", 00:26:59.211 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:26:59.211 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:26:59.211 "hdgst": false, 00:26:59.211 "ddgst": false 00:26:59.211 }, 00:26:59.211 "method": "bdev_nvme_attach_controller" 00:26:59.211 }' 00:26:59.211 [2024-07-24 09:12:37.215969] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:26:59.211 [2024-07-24 09:12:37.216048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3851467 ] 00:26:59.211 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.211 [2024-07-24 09:12:37.251801] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:26:59.211 [2024-07-24 09:12:37.280566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.469 [2024-07-24 09:12:37.366886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.871 Running I/O for 1 seconds... 
00:27:02.245 00:27:02.245 Latency(us) 00:27:02.245 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:02.245 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme1n1 : 1.09 234.04 14.63 0.00 0.00 270647.94 18058.81 251658.24 00:27:02.245 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme2n1 : 1.12 228.57 14.29 0.00 0.00 272738.61 20388.98 253211.69 00:27:02.245 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme3n1 : 1.05 243.75 15.23 0.00 0.00 250663.82 16893.72 251658.24 00:27:02.245 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme4n1 : 1.16 220.03 13.75 0.00 0.00 274444.89 28156.21 260978.92 00:27:02.245 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme5n1 : 1.12 229.42 14.34 0.00 0.00 257892.69 21554.06 253211.69 00:27:02.245 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme6n1 : 1.11 231.28 14.45 0.00 0.00 251243.71 20777.34 250104.79 00:27:02.245 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme7n1 : 1.18 272.03 17.00 0.00 0.00 211190.37 14563.56 237677.23 00:27:02.245 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme8n1 : 1.18 216.48 13.53 0.00 0.00 261154.51 25437.68 270299.59 00:27:02.245 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme9n1 : 1.19 269.46 16.84 0.00 0.00 206083.57 16214.09 248551.35 00:27:02.245 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:02.245 Verification LBA range: start 0x0 length 0x400 00:27:02.245 Nvme10n1 : 1.19 268.32 16.77 0.00 0.00 203463.26 8543.95 274959.93 00:27:02.245 =================================================================================================================== 00:27:02.245 Total : 2413.38 150.84 0.00 0.00 243228.62 8543.95 274959.93 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:02.504 09:12:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:02.504 rmmod nvme_tcp 00:27:02.504 rmmod nvme_fabrics 00:27:02.504 rmmod nvme_keyring 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3850984 ']' 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3850984 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3850984 ']' 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3850984 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3850984 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3850984' 00:27:02.504 killing process with pid 3850984 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3850984 00:27:02.504 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3850984 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
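What nvmftestfini is doing here, condensed into a sketch of the effective commands (_remove_spdk_ns is SPDK's helper for tearing down the test namespaces; its body is not shown in this trace):

    modprobe -v -r nvme-tcp       # produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 3850984 && wait 3850984  # killprocess: stop the nvmf_tgt (reactor_1) and reap it
    _remove_spdk_ns               # presumably deletes cvl_0_0_ns_spdk, returning cvl_0_0 to the host
    ip -4 addr flush cvl_0_1      # drop the initiator address (first command on the next trace line)

With the namespace and addresses gone, tc1 reports roughly 11.9 s of wall time and prints its END TEST banner, and run_test immediately starts nvmf_shutdown_tc2, whose nvmftestinit re-runs the same PCI discovery that opened tc1.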
00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.072 09:12:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:04.977 00:27:04.977 real 0m11.863s 00:27:04.977 user 0m34.663s 00:27:04.977 sys 0m3.304s 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:04.977 ************************************ 00:27:04.977 END TEST nvmf_shutdown_tc1 00:27:04.977 ************************************ 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:04.977 09:12:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:04.977 ************************************ 00:27:04.977 START TEST nvmf_shutdown_tc2 00:27:04.977 ************************************ 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.977 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:04.978 09:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.978 09:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:04.978 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:04.978 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.978 09:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:04.978 Found net devices under 0000:09:00.0: cvl_0_0 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:04.978 Found net devices under 0000:09:00.1: cvl_0_1 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.978 09:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.978 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.979 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.979 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:05.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:05.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.103 ms 00:27:05.236 00:27:05.236 --- 10.0.0.2 ping statistics --- 00:27:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.236 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:05.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:05.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:27:05.236 00:27:05.236 --- 10.0.0.1 ping statistics --- 00:27:05.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:05.236 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3852229 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3852229 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3852229 ']' 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:05.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
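The two clean pings above are the acceptance check for the test topology: one physical port stays in the host namespace as the initiator, the other is pushed into a private namespace where nvmf_tgt runs. Pulled out of the trace into one block (the interface names cvl_0_0/cvl_0_1 are this host's e810 ports):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (host netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                  # host -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> host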
00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.236 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.236 [2024-07-24 09:12:43.214052] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:05.236 [2024-07-24 09:12:43.214165] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:05.236 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.236 [2024-07-24 09:12:43.254056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:05.236 [2024-07-24 09:12:43.284846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.494 [2024-07-24 09:12:43.379627] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.494 [2024-07-24 09:12:43.379681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.494 [2024-07-24 09:12:43.379705] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.494 [2024-07-24 09:12:43.379720] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.494 [2024-07-24 09:12:43.379731] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.494 [2024-07-24 09:12:43.379834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.494 [2024-07-24 09:12:43.379940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.494 [2024-07-24 09:12:43.379993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:05.494 [2024-07-24 09:12:43.379995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.494 [2024-07-24 09:12:43.544731] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.494 09:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:05.494 
09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.494 09:12:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:05.494 Malloc1 00:27:05.752 [2024-07-24 09:12:43.624017] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.752 Malloc2 00:27:05.752 Malloc3 00:27:05.752 Malloc4 00:27:05.752 Malloc5 00:27:05.752 Malloc6 00:27:06.010 Malloc7 00:27:06.010 Malloc8 00:27:06.010 Malloc9 00:27:06.010 Malloc10 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3852408 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3852408 /var/tmp/bdevperf.sock 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3852408 ']' 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:06.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
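The ten Malloc bdevs above back ten NVMe-oF subsystems; the RPC batch that creates them was written into rpcs.txt by the "# cat" loop traced earlier and replayed through rpc_cmd. The script itself is not echoed into the log, but per subsystem it amounts to something like this (a sketch using standard SPDK rpc.py methods; the malloc size and block-size values are assumptions):

    # one iteration of the num_subsystems loop, for i = 1..10
    bdev_malloc_create -b Malloc$i 64 512
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420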
00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.010 { 00:27:06.010 "params": { 00:27:06.010 "name": "Nvme$subsystem", 00:27:06.010 "trtype": "$TEST_TRANSPORT", 00:27:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.010 "adrfam": "ipv4", 00:27:06.010 "trsvcid": "$NVMF_PORT", 00:27:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.010 "hdgst": ${hdgst:-false}, 00:27:06.010 "ddgst": ${ddgst:-false} 00:27:06.010 }, 00:27:06.010 "method": "bdev_nvme_attach_controller" 00:27:06.010 } 00:27:06.010 EOF 00:27:06.010 )") 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.010 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.010 { 00:27:06.010 "params": { 00:27:06.010 "name": "Nvme$subsystem", 00:27:06.010 "trtype": "$TEST_TRANSPORT", 00:27:06.010 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.010 "adrfam": "ipv4", 00:27:06.010 "trsvcid": "$NVMF_PORT", 00:27:06.010 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.010 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.010 "hdgst": ${hdgst:-false}, 00:27:06.010 "ddgst": ${ddgst:-false} 00:27:06.010 }, 00:27:06.010 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.011 { 00:27:06.011 "params": { 00:27:06.011 "name": "Nvme$subsystem", 00:27:06.011 "trtype": "$TEST_TRANSPORT", 00:27:06.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.011 "adrfam": "ipv4", 00:27:06.011 "trsvcid": "$NVMF_PORT", 00:27:06.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.011 "hdgst": ${hdgst:-false}, 00:27:06.011 "ddgst": ${ddgst:-false} 00:27:06.011 }, 00:27:06.011 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.011 { 00:27:06.011 "params": { 00:27:06.011 "name": "Nvme$subsystem", 00:27:06.011 
"trtype": "$TEST_TRANSPORT", 00:27:06.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.011 "adrfam": "ipv4", 00:27:06.011 "trsvcid": "$NVMF_PORT", 00:27:06.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.011 "hdgst": ${hdgst:-false}, 00:27:06.011 "ddgst": ${ddgst:-false} 00:27:06.011 }, 00:27:06.011 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.011 { 00:27:06.011 "params": { 00:27:06.011 "name": "Nvme$subsystem", 00:27:06.011 "trtype": "$TEST_TRANSPORT", 00:27:06.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.011 "adrfam": "ipv4", 00:27:06.011 "trsvcid": "$NVMF_PORT", 00:27:06.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.011 "hdgst": ${hdgst:-false}, 00:27:06.011 "ddgst": ${ddgst:-false} 00:27:06.011 }, 00:27:06.011 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.011 { 00:27:06.011 "params": { 00:27:06.011 "name": "Nvme$subsystem", 00:27:06.011 "trtype": "$TEST_TRANSPORT", 00:27:06.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.011 "adrfam": "ipv4", 00:27:06.011 "trsvcid": "$NVMF_PORT", 00:27:06.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.011 "hdgst": ${hdgst:-false}, 00:27:06.011 "ddgst": ${ddgst:-false} 00:27:06.011 }, 00:27:06.011 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.011 { 00:27:06.011 "params": { 00:27:06.011 "name": "Nvme$subsystem", 00:27:06.011 "trtype": "$TEST_TRANSPORT", 00:27:06.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.011 "adrfam": "ipv4", 00:27:06.011 "trsvcid": "$NVMF_PORT", 00:27:06.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.011 "hdgst": ${hdgst:-false}, 00:27:06.011 "ddgst": ${ddgst:-false} 00:27:06.011 }, 00:27:06.011 "method": "bdev_nvme_attach_controller" 00:27:06.011 } 00:27:06.011 EOF 00:27:06.011 )") 00:27:06.011 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.269 09:12:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.269 { 00:27:06.269 "params": { 00:27:06.269 "name": "Nvme$subsystem", 00:27:06.269 "trtype": "$TEST_TRANSPORT", 00:27:06.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.269 "adrfam": "ipv4", 00:27:06.269 "trsvcid": "$NVMF_PORT", 00:27:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.269 "hdgst": ${hdgst:-false}, 00:27:06.269 "ddgst": ${ddgst:-false} 00:27:06.269 }, 00:27:06.269 "method": "bdev_nvme_attach_controller" 00:27:06.269 } 00:27:06.269 EOF 00:27:06.269 )") 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.269 { 00:27:06.269 "params": { 00:27:06.269 "name": "Nvme$subsystem", 00:27:06.269 "trtype": "$TEST_TRANSPORT", 00:27:06.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.269 "adrfam": "ipv4", 00:27:06.269 "trsvcid": "$NVMF_PORT", 00:27:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.269 "hdgst": ${hdgst:-false}, 00:27:06.269 "ddgst": ${ddgst:-false} 00:27:06.269 }, 00:27:06.269 "method": "bdev_nvme_attach_controller" 00:27:06.269 } 00:27:06.269 EOF 00:27:06.269 )") 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:06.269 { 00:27:06.269 "params": { 00:27:06.269 "name": "Nvme$subsystem", 00:27:06.269 "trtype": "$TEST_TRANSPORT", 00:27:06.269 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:06.269 "adrfam": "ipv4", 00:27:06.269 "trsvcid": "$NVMF_PORT", 00:27:06.269 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:06.269 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:06.269 "hdgst": ${hdgst:-false}, 00:27:06.269 "ddgst": ${ddgst:-false} 00:27:06.269 }, 00:27:06.269 "method": "bdev_nvme_attach_controller" 00:27:06.269 } 00:27:06.269 EOF 00:27:06.269 )") 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
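Each pass of the loop above appends one controller stanza to the config array; the IFS=,/printf pair in the next trace entries then joins the ten stanzas into the single JSON document bdevperf receives on /dev/fd/63, with jq . pretty-printing and validating it. The join idiom in isolation (run it in a subshell to avoid leaking the IFS change):

    IFS=,                          # make array expansion use a comma separator
    printf '%s\n' "${config[*]}"   # "${config[*]}" glues all stanzas with IFS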
00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:06.269 09:12:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:06.269 "params": { 00:27:06.270 "name": "Nvme1", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme2", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme3", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme4", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme5", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme6", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme7", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme8", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme9", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 },{ 00:27:06.270 "params": { 00:27:06.270 "name": "Nvme10", 00:27:06.270 "trtype": "tcp", 00:27:06.270 "traddr": "10.0.0.2", 00:27:06.270 "adrfam": "ipv4", 00:27:06.270 "trsvcid": "4420", 00:27:06.270 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:06.270 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:06.270 "hdgst": false, 00:27:06.270 "ddgst": false 00:27:06.270 }, 00:27:06.270 "method": "bdev_nvme_attach_controller" 00:27:06.270 }' 00:27:06.270 [2024-07-24 09:12:44.147370] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:06.270 [2024-07-24 09:12:44.147464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3852408 ] 00:27:06.270 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.270 [2024-07-24 09:12:44.183141] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:06.270 [2024-07-24 09:12:44.212587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:06.270 [2024-07-24 09:12:44.298148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.168 Running I/O for 10 seconds... 
00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:08.168 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.427 09:12:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3852408 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3852408 ']' 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3852408 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852408 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852408' 00:27:08.427 killing process with pid 3852408 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3852408 00:27:08.427 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3852408
00:27:08.685 Received shutdown signal, test time was about 0.779889 seconds
00:27:08.685
00:27:08.685 Latency(us)
00:27:08.685 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:08.685 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme1n1 : 0.77 247.97 15.50 0.00 0.00 254247.82 34564.17 259425.47
00:27:08.685 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme2n1 : 0.77 250.18 15.64 0.00 0.00 245617.40 27185.30 222142.77
00:27:08.685 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme3n1 : 0.76 254.04 15.88 0.00 0.00 235859.82 18350.08 256318.58
00:27:08.685 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme4n1 : 0.76 251.47 15.72 0.00 0.00 232488.96 22136.60 248551.35
00:27:08.685 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme5n1 : 0.76 252.78 15.80 0.00 0.00 224761.68 20194.80 240784.12
00:27:08.685 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme6n1 : 0.78 246.45 15.40 0.00 0.00 225678.98 18932.62 245444.46
00:27:08.685 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.685 Nvme7n1 : 0.77 248.25 15.52 0.00 0.00 217379.27 17670.45 257872.02
00:27:08.685 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.685 Verification LBA range: start 0x0 length 0x400
00:27:08.686 Nvme8n1 : 0.74 186.71 11.67 0.00 0.00 272682.00 9903.22 257872.02
00:27:08.686 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.686 Verification LBA range: start 0x0 length 0x400
00:27:08.686 Nvme9n1 : 0.73 174.41 10.90 0.00 0.00 289428.29 20583.16 273406.48
00:27:08.686 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:08.686 Verification LBA range: start 0x0 length 0x400
00:27:08.686 Nvme10n1 : 0.75 171.49 10.72 0.00 0.00 286355.91 17670.45 287387.50
00:27:08.686 ===================================================================================================================
00:27:08.686 Total : 2283.75 142.73 0.00 0.00 244792.29 9903.22 287387.50
00:27:08.943 09:12:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:09.877 rmmod nvme_tcp 00:27:09.877 rmmod nvme_fabrics 00:27:09.877 rmmod nvme_keyring 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:09.877
09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3852229 ']' 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3852229 ']' 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3852229' 00:27:09.877 killing process with pid 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3852229 00:27:09.877 09:12:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3852229 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:10.443 09:12:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:12.346 00:27:12.346 real 0m7.416s 00:27:12.346 user 0m22.059s 00:27:12.346 sys 0m1.472s 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:12.346 ************************************ 
00:27:12.346 END TEST nvmf_shutdown_tc2 00:27:12.346 ************************************ 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.346 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:12.605 ************************************ 00:27:12.605 START TEST nvmf_shutdown_tc3 00:27:12.605 ************************************ 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:12.605 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:12.606 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:12.606 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:12.606 Found net devices under 0000:09:00.0: cvl_0_0 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:12.606 Found net devices under 0000:09:00.1: cvl_0_1 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:12.606 09:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:12.606 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:12.606 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:27:12.606 00:27:12.606 --- 10.0.0.2 ping statistics --- 00:27:12.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.606 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:12.606 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:12.606 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:12.606 00:27:12.606 --- 10.0.0.1 ping statistics --- 00:27:12.606 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:12.606 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:12.606 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3853318 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3853318 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3853318 ']' 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
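nvmf_tcp_init then splits the two ports across network namespaces so target and initiator talk over a real TCP path. Pulled out of the trace, the sequence is:

    # Target port moves into its own namespace; initiator port stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator check

Prefixing the target with ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD wrapper above) is what makes 10.0.0.2:4420 reachable only through cvl_0_1.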
00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:12.607 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:12.607 [2024-07-24 09:12:50.698181] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:12.607 [2024-07-24 09:12:50.698256] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:12.865 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.865 [2024-07-24 09:12:50.741167] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:12.865 [2024-07-24 09:12:50.769694] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:12.865 [2024-07-24 09:12:50.863572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:12.865 [2024-07-24 09:12:50.863626] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:12.865 [2024-07-24 09:12:50.863656] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:12.865 [2024-07-24 09:12:50.863667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:12.865 [2024-07-24 09:12:50.863677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:12.865 [2024-07-24 09:12:50.863735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:12.865 [2024-07-24 09:12:50.863796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:12.865 [2024-07-24 09:12:50.863865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.865 [2024-07-24 09:12:50.863862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:13.124 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.124 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:13.124 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:13.124 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.124 09:12:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.124 [2024-07-24 09:12:51.029685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:13.124 09:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:13.124 
09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:13.124 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.124 Malloc1 00:27:13.124 [2024-07-24 09:12:51.119691] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:13.124 Malloc2 00:27:13.124 Malloc3 00:27:13.382 Malloc4 00:27:13.383 Malloc5 00:27:13.383 Malloc6 00:27:13.383 Malloc7 00:27:13.383 Malloc8 00:27:13.383 Malloc9 00:27:13.641 Malloc10 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3853491 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3853491 /var/tmp/bdevperf.sock 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3853491 ']' 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.641 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:13.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
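The Malloc1 through Malloc10 lines are the visible half of the create_subsystems loop: the cat calls at shutdown.sh@28 append one RPC batch per subsystem to rpcs.txt, which is then replayed in a single rpc.py pass. The exact RPC arguments are not shown in this log, so the batch below is a plausible reconstruction, with the bdev size and serial numbers as assumptions:

    # Assumed batch shape; the log confirms the Malloc$i bdevs and a listener
    # on 10.0.0.2:4420, but the sizes and serials here are illustrative.
    for i in {1..10}; do
        {
            echo "bdev_malloc_create 128 512 -b Malloc$i"
            echo "nvmf_subsystem_create nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> "$testdir/rpcs.txt"
    done
    rpc.py < "$testdir/rpcs.txt"    # one rpc.py process creates all ten subsystems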
00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:13.642 { 00:27:13.642 "params": { 00:27:13.642 "name": "Nvme$subsystem", 00:27:13.642 "trtype": "$TEST_TRANSPORT", 00:27:13.642 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:13.642 "adrfam": "ipv4", 00:27:13.642 "trsvcid": "$NVMF_PORT", 00:27:13.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:13.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:13.642 "hdgst": ${hdgst:-false}, 00:27:13.642 "ddgst": ${ddgst:-false} 00:27:13.642 }, 00:27:13.642 "method": "bdev_nvme_attach_controller" 00:27:13.642 } 00:27:13.642 EOF 00:27:13.642 )") 00:27:13.642 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:13.643 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:13.643 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:13.643 09:12:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme1", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme2", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme3", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme4", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme5", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme6", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme7", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme8", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme9", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 },{ 00:27:13.643 "params": { 00:27:13.643 "name": "Nvme10", 00:27:13.643 "trtype": "tcp", 00:27:13.643 "traddr": "10.0.0.2", 00:27:13.643 "adrfam": "ipv4", 00:27:13.643 "trsvcid": "4420", 00:27:13.643 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:13.643 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:13.643 "hdgst": false, 00:27:13.643 "ddgst": false 00:27:13.643 }, 00:27:13.643 "method": "bdev_nvme_attach_controller" 00:27:13.643 }' 00:27:13.643 [2024-07-24 09:12:51.640623] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:13.643 [2024-07-24 09:12:51.640699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3853491 ] 00:27:13.643 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.643 [2024-07-24 09:12:51.675784] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:13.643 [2024-07-24 09:12:51.704795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.901 [2024-07-24 09:12:51.790925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.798 Running I/O for 10 seconds... 
00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:15.798 09:12:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:16.057 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3853318 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3853318 ']' 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3853318 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3853318 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:16.319 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:16.319 09:12:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3853318'
00:27:16.319 killing process with pid 3853318
09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3853318
09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3853318
00:27:16.319 [2024-07-24 09:12:54.417415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9af0 is same with the state(5) to be set
00:27:16.320 [2024-07-24 09:12:54.419625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419930] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419965] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419976] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.419999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420034] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 
00:27:16.320 [2024-07-24 09:12:54.420171] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.320 [2024-07-24 09:12:54.420223] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420235] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420282] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420376] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.420410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc610 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.421741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9fb0 is same with the state(5) to be set 00:27:16.321 [2024-07-24 09:12:54.421766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcf9fb0 is 
(last message repeated ~62 more times for tqpair=0xcf9fb0, 09:12:54.421766 through 09:12:54.422623, interleaved with the notices below)
00:27:16.321 [2024-07-24 09:12:54.421782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.321 [2024-07-24 09:12:54.421825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for command cid:1, cid:2 and cid:3)
00:27:16.321 [2024-07-24 09:12:54.421934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae920 is same with the state(5) to be set
00:27:16.321 [2024-07-24 09:12:54.422121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:16.321 [2024-07-24 09:12:54.422145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(again the pair repeats for command cid:1, cid:2 and cid:3)
00:27:16.321 [2024-07-24 09:12:54.422243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009f10 is same with the state(5) to be set
00:27:16.322 [2024-07-24 09:12:54.425407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:16.322 [2024-07-24 09:12:54.425439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same WRITE / ABORTED - SQ DELETION pair repeats for cid:1 through cid:63, lba:24704 through lba:32640, 09:12:54.425470 through 09:12:54.427442)
00:27:16.322 [2024-07-24 09:12:54.426161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa950 is same with the state(5) to be set
(last message repeated ~62 more times for tqpair=0xcfa950, interleaved with the WRITE aborts above, through 09:12:54.427035)
00:27:16.324 [2024-07-24 09:12:54.427480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:16.324 [2024-07-24 09:12:54.427554] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2198de0 was disconnected and freed. reset controller.
00:27:16.324 [2024-07-24 09:12:54.429461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set
(last message repeated ~19 more times for tqpair=0xcfae10, 09:12:54.429491 through 09:12:54.429773)
00:27:16.325 [2024-07-24 09:12:54.429794] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429920] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.429992] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:16.325 [2024-07-24 09:12:54.430153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036300 (9): Bad file descriptor 00:27:16.325 [2024-07-24 09:12:54.430225] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:16.325 [2024-07-24 09:12:54.430247] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430293] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is
same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430319] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430331] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430354] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430366] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430398] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430426] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430470] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430490] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430577] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430686] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:16.325 [2024-07-24 09:12:54.430598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430778] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the 
state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430802] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.325 [2024-07-24 09:12:54.430814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfae10 is same with the state(5) to be set 00:27:16.594 [2024-07-24 09:12:54.431356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.431976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.431995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.432009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.432024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.432038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.432054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.432067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.594 [2024-07-24 09:12:54.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.594 [2024-07-24 09:12:54.432097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432330] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432633] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.432985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.432999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433240] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.595 [2024-07-24 09:12:54.433268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.595 [2024-07-24 09:12:54.433282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.433298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-24 09:12:54.433311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.433326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-24 09:12:54.433340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.433356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.596 [2024-07-24 09:12:54.433370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.433384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218fff0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.433882] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218fff0 was disconnected and freed. reset controller. 
00:27:16.596 [2024-07-24 09:12:54.434067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.596 [2024-07-24 09:12:54.434098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036300 with addr=10.0.0.2, port=4420 00:27:16.596 [2024-07-24 09:12:54.434123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036300 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.434163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae920 (9): Bad file descriptor 00:27:16.596 [2024-07-24 09:12:54.434245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202cce0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.434420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:16.596 [2024-07-24 09:12:54.434534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d5ad0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.434576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.596 [2024-07-24 09:12:54.434685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.596 [2024-07-24 09:12:54.434697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3070 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.434724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009f10 (9): Bad file descriptor 00:27:16.596 [2024-07-24 09:12:54.434862] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:16.596 [2024-07-24 09:12:54.436164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb2f0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.436193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb2f0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.436372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:16.596 [2024-07-24 09:12:54.436416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036300 (9): Bad file descriptor 00:27:16.596 [2024-07-24 09:12:54.437198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437229] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437340] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437352] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437394] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437405] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437429] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437453] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437488] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437504] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437517] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.596 [2024-07-24 09:12:54.437546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ae920 with addr=10.0.0.2, port=4420 00:27:16.596 [2024-07-24 09:12:54.437574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae920 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:27:16.596 [2024-07-24 09:12:54.437598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:27:16.596 [2024-07-24 09:12:54.437611] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437624] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:27:16.596 [2024-07-24 09:12:54.437637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437649] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.596 [2024-07-24 09:12:54.437661] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437673] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437709] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437721] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437732] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437756] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437767] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-24 09:12:54.437780] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*:
The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437807] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437832] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-24 09:12:54.437844] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437856] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437879] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437891] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-24 09:12:54.437903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-24 09:12:54.437915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437929] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-24 09:12:54.437942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-24 09:12:54.437954] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.597 [2024-07-24 09:12:54.437967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the
state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.597 [2024-07-24 09:12:54.437980] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437993] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.437994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2005c00 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.438005] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.438017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.438029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfb7b0 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.438066] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2005c00 was disconnected and freed. reset controller. 00:27:16.597 [2024-07-24 09:12:54.438529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.597 [2024-07-24 09:12:54.438564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae920 (9): Bad file descriptor 00:27:16.597 [2024-07-24 09:12:54.439110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439152] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439222] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is
same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439373] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439430] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439442] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439454] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439465] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439477] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439524] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439536] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439548] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439572] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.597 [2024-07-24 09:12:54.439619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439631] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439642] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439654] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439666] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with t[2024-07-24 09:12:54.439711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controllehe state(5) to be set 00:27:16.598 r 00:27:16.598 [2024-07-24 09:12:54.439733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439777] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with t[2024-07-24 09:12:54.439774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x1aff610 (9): he state(5) to be set 00:27:16.598 Bad file descriptor 00:27:16.598 [2024-07-24 09:12:54.439792] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:16.598 [2024-07-24 09:12:54.439804] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:16.598 [2024-07-24 09:12:54.439816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:16.598 [2024-07-24 09:12:54.439829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439876] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.439888] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbc70 is same with the state(5) to be set 00:27:16.598 [2024-07-24 09:12:54.440113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
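The nvme_ctrlr_disconnect / spdk_nvme_ctrlr_reconnect_poll_async pairs above are bdev_nvme driving a controller reset while the target side is still tearing down its qpairs. A minimal sketch of that disconnect-then-poll reset loop against SPDK's public NVMe API, assuming a host-side caller outside this test (the helper name reset_ctrlr is hypothetical):

    /* Hypothetical sketch of the cycle the log reports: disconnect the
     * controller, then poll reconnection until it completes or fails
     * ("controller reinitialization failed" / "in failed state"). */
    #include <errno.h>
    #include "spdk/nvme.h"

    static int
    reset_ctrlr(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc = spdk_nvme_ctrlr_disconnect(ctrlr);

        if (rc != 0) {
            return rc;
        }
        spdk_nvme_ctrlr_reconnect_async(ctrlr);
        do {
            rc = spdk_nvme_ctrlr_reconnect_poll_async(ctrlr);
        } while (rc == -EAGAIN);
        /* A nonzero rc here is the case the log prints as
         * "_bdev_nvme_reset_ctrlr_complete: Resetting controller failed." */
        return rc;
    }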
00:27:16.598 [2024-07-24 09:12:54.440199] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.598 [2024-07-24 09:12:54.440615 … 09:12:54.442374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfc130 is same with the state(5) to be set [identical message repeated 63 times between these timestamps; the records below were interleaved with that run]
00:27:16.598 [2024-07-24 09:12:54.440859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.598 [2024-07-24 09:12:54.440888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff610 with addr=10.0.0.2, port=4420
00:27:16.598 [2024-07-24 09:12:54.440906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff610 is same with the state(5) to be set
00:27:16.598 [2024-07-24 09:12:54.441150] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff610 (9): Bad file descriptor
00:27:16.598 [2024-07-24 09:12:54.441260] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.598 [2024-07-24 09:12:54.441412] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:16.598 [2024-07-24 09:12:54.441433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:16.598 [2024-07-24 09:12:54.441447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:16.598 [2024-07-24 09:12:54.441613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:16.598 [2024-07-24 09:12:54.441640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.598 [2024-07-24 09:12:54.441725] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:16.599 [2024-07-24 09:12:54.441958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.599 [2024-07-24 09:12:54.441986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036300 with addr=10.0.0.2, port=4420
00:27:16.599 [2024-07-24 09:12:54.442003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036300 is same with the state(5) to be set
00:27:16.599 [2024-07-24 09:12:54.442138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036300 (9): Bad file descriptor
00:27:16.599 [2024-07-24 09:12:54.442244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:16.599 [2024-07-24 09:12:54.442265] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:16.599 [2024-07-24 09:12:54.442279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:16.599 [2024-07-24 09:12:54.442377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
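errno = 111 is ECONNREFUSED: while the subsystem is mid-reset nothing is accepting on 10.0.0.2:4420, so each reconnect attempt from nvme_tcp_qpair_connect_sock is refused and the controller ends up failed. A minimal sketch of the same connect path with SPDK's public API, assuming an out-of-tree caller (the standalone main and the choice of subnqn here are illustrative, not this test's code):

    /* Hypothetical repro of the connect path failing above with errno = 111
     * (ECONNREFUSED). The endpoint is the one the log prints; running this
     * while the target listener is down fails the same way. */
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts opts;
        struct spdk_nvme_transport_id trid = {};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode3");
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            /* The refused TCP connect surfaces here, as in the log's
             * "sock connection error of tqpair=... with addr=10.0.0.2". */
            fprintf(stderr, "connect to %s:%s failed\n", trid.traddr, trid.trsvcid);
            return 1;
        }
        spdk_nvme_detach(ctrlr);
        return 0;
    }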
00:27:16.599 [2024-07-24 09:12:54.442689 … 09:12:54.444618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0…63 nsid:1 lba:16384…24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [64 READ command/completion pairs collapsed; cid steps by 1 and lba by 128 per pair, and each command is followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:16.601 [2024-07-24 09:12:54.444633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x218eb10 is same with the state(5) to be set
00:27:16.601 [2024-07-24 09:12:54.444703] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x218eb10 was disconnected and freed. reset controller.
00:27:16.601 [2024-07-24 09:12:54.444763 … 09:12:54.444908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0…3 nsid:0 cdw10:00000000 cdw11:00000000 [4 admin command/completion pairs collapsed; each completes ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:16.601 [2024-07-24 09:12:54.444921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163180 is same with the state(5) to be set
00:27:16.601 [2024-07-24 09:12:54.444962 … 09:12:54.445063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0…3 nsid:0 cdw10:00000000 cdw11:00000000 [4 admin command/completion pairs collapsed, as above]
00:27:16.601 [2024-07-24 09:12:54.445075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d3ab0 is same with the state(5) to be set
00:27:16.601 [2024-07-24 09:12:54.445131 … 09:12:54.445232] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0…3 nsid:0 cdw10:00000000 cdw11:00000000 [4 admin command/completion pairs collapsed, as above]
00:27:16.601 [2024-07-24 09:12:54.445245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21a3cc0 is same with the state(5) to be set
00:27:16.601 [2024-07-24 09:12:54.445271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202cce0 (9): Bad file descriptor
00:27:16.601 [2024-07-24 09:12:54.445301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d5ad0 (9): Bad file descriptor
00:27:16.601 [2024-07-24 09:12:54.445329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c3070 (9): Bad file descriptor
00:27:16.601 [2024-07-24 09:12:54.446562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:16.601 [2024-07-24 09:12:54.446594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2163180 (9): Bad file descriptor
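Every queued READ and ASYNC EVENT REQUEST above completes with ABORTED - SQ DELETION (00/08): status code type 0x0 (generic), status code 0x08, meaning the command was drained when its submission queue was deleted for the reset, not failed by the media. A sketch of how a completion callback can classify that status using the public spec constants (read_done and its bool flag are illustrative only):

    /* Hypothetical completion callback showing how the ABORTED - SQ DELETION
     * (00/08) completions above are classified: generic status code type,
     * status code 0x08 (command aborted due to SQ deletion). */
    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void
    read_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *flushed_by_sq_deletion = cb_arg;

        *flushed_by_sq_deletion =
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
        /* True for every READ in the dump above: the I/O never reached the
         * namespace; it was flushed when the qpair was torn down. */
    }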
00:27:16.601 [2024-07-24 09:12:54.446655 … 09:12:54.448181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0…49 nsid:1 lba:16384…22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [50 READ command/completion pairs collapsed; cid steps by 1 and lba by 128 per pair, and each command is followed by nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:16.602 [2024-07-24 09:12:54.448197] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.602 [2024-07-24 09:12:54.448419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.602 [2024-07-24 09:12:54.448435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.603 [2024-07-24 09:12:54.448607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.603 [2024-07-24 09:12:54.448621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21966a0 is same with the state(5) to be set 00:27:16.603 [2024-07-24 09:12:54.449881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.603 [2024-07-24 09:12:54.450271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:27:16.603 [2024-07-24 09:12:54.450469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.603 [2024-07-24 09:12:54.450497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2163180 with addr=10.0.0.2, port=4420 00:27:16.603 [2024-07-24 09:12:54.450514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2163180 is same with the state(5) to be set 00:27:16.603 [2024-07-24 09:12:54.450656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.603 [2024-07-24 09:12:54.450680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009f10 with addr=10.0.0.2, port=4420 00:27:16.603 [2024-07-24 09:12:54.450695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009f10 is same with the state(5) to be set 00:27:16.603 [2024-07-24 09:12:54.451208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.603 [2024-07-24 09:12:54.451235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ae920 with addr=10.0.0.2, port=4420 00:27:16.603 [2024-07-24 09:12:54.451251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae920 is same with the state(5) to be set 00:27:16.603 [2024-07-24 09:12:54.451271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2163180 (9): Bad file descriptor 00:27:16.603 [2024-07-24 09:12:54.451290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2009f10 (9): Bad file descriptor 00:27:16.603 [2024-07-24 09:12:54.451361] 
nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:27:16.603 [2024-07-24 09:12:54.451405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae920 (9): Bad file descriptor 00:27:16.603 [2024-07-24 09:12:54.451426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:27:16.603 [2024-07-24 09:12:54.451439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:27:16.603 [2024-07-24 09:12:54.451454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:27:16.603 [2024-07-24 09:12:54.451473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:16.603 [2024-07-24 09:12:54.451487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:16.603 [2024-07-24 09:12:54.451499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.603 [2024-07-24 09:12:54.451559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.603 [2024-07-24 09:12:54.451580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.603 [2024-07-24 09:12:54.451732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.603 [2024-07-24 09:12:54.451757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aff610 with addr=10.0.0.2, port=4420 00:27:16.603 [2024-07-24 09:12:54.451773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aff610 is same with the state(5) to be set 00:27:16.603 [2024-07-24 09:12:54.451797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:27:16.603 [2024-07-24 09:12:54.451810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:27:16.603 [2024-07-24 09:12:54.451823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:27:16.603 [2024-07-24 09:12:54.451874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:16.603 [2024-07-24 09:12:54.451896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aff610 (9): Bad file descriptor 00:27:16.603 [2024-07-24 09:12:54.451956] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:27:16.603 [2024-07-24 09:12:54.451975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:27:16.603 [2024-07-24 09:12:54.451988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:27:16.603 [2024-07-24 09:12:54.452036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:27:16.603 [2024-07-24 09:12:54.452057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
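The repeated completion status above, "ABORTED - SQ DELETION (00/08)", is the NVMe status pair SCT 0x00 (generic command status) / SC 0x08 (Command Aborted due to SQ Deletion), which is expected while the target tears down the submission queue during a controller reset; errno = 111 in the posix_sock_create errors is Linux ECONNREFUSED, and the "(9)" in the flush failures is EBADF. A minimal, self-contained C sketch of decoding that (SCT/SC) pair follows; decode_nvme_status is an illustrative helper, not an SPDK API, and it covers only the codes seen in this log.

    #include <stdio.h>

    /*
     * Decode the "(SCT/SC)" pair printed by the completion traces above,
     * e.g. "ABORTED - SQ DELETION (00/08)".  Hypothetical helper for
     * illustration only; covers just the codes that appear in this log.
     */
    static const char *
    decode_nvme_status(unsigned int sct, unsigned int sc)
    {
            if (sct == 0x00) {      /* Status Code Type 0: generic command status */
                    switch (sc) {
                    case 0x00:
                            return "SUCCESS";
                    case 0x08:
                            return "ABORTED - SQ DELETION";
                    default:
                            break;
                    }
            }
            return "UNKNOWN";
    }

    int
    main(void)
    {
            /* The log prints the pair in hex as (SCT/SC), here (00/08). */
            printf("(00/08) -> %s\n", decode_nvme_status(0x00, 0x08));
            return 0;
    }

Compiled and run, this prints "(00/08) -> ABORTED - SQ DELETION", matching the traces above.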
00:27:16.603 [2024-07-24 09:12:54.452212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.603 [2024-07-24 09:12:54.452240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2036300 with addr=10.0.0.2, port=4420
00:27:16.603 [2024-07-24 09:12:54.452255] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2036300 is same with the state(5) to be set
00:27:16.603 [2024-07-24 09:12:54.452306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2036300 (9): Bad file descriptor
00:27:16.603 [2024-07-24 09:12:54.452354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:16.603 [2024-07-24 09:12:54.452370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:16.603 [2024-07-24 09:12:54.452392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:16.603 [2024-07-24 09:12:54.452440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.603 [2024-07-24 09:12:54.454762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d3ab0 (9): Bad file descriptor
00:27:16.603 [2024-07-24 09:12:54.454800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a3cc0 (9): Bad file descriptor
00:27:16.603 [... identical nvme_qpair.c NOTICE pairs repeated for READ sqid:1 cid:0 through cid:63 (nsid:1, lba:16384 to lba:24448 in steps of 128, len:128 each), every command completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:16.605 [2024-07-24 09:12:54.456905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2197950 is same with the state(5) to be set
00:27:16.605 [... the same READ/ABORTED - SQ DELETION (00/08) NOTICE pairs repeated again for sqid:1 cid:0 through cid:63 (lba:16384 to lba:24448) ...]
00:27:16.607 [2024-07-24 09:12:54.460138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2003430 is same with the state(5) to be set
00:27:16.607 [... a further repeat of the READ/ABORTED - SQ DELETION (00/08) NOTICE pairs for sqid:1 cid:0 through cid:6 (lba:16384 to lba:17152) ...] 00:27:16.607 [2024-07-24 09:12:54.461582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.461973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.461987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.607 [2024-07-24 09:12:54.462341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.607 [2024-07-24 09:12:54.462355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:16.608 [2024-07-24 09:12:54.462797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.462981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.462997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 
09:12:54.463100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.608 [2024-07-24 09:12:54.463287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.608 [2024-07-24 09:12:54.463302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2004770 is same with the state(5) to be set 00:27:16.608 [2024-07-24 09:12:54.464530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:16.608 [2024-07-24 09:12:54.464561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:16.608 [2024-07-24 09:12:54.464579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:27:16.608 [2024-07-24 09:12:54.465020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.608 [2024-07-24 09:12:54.465051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21d5ad0 with addr=10.0.0.2, port=4420 00:27:16.608 [2024-07-24 09:12:54.465067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21d5ad0 is same with the state(5) to be set 00:27:16.608 [2024-07-24 09:12:54.465200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:16.608 [2024-07-24 09:12:54.465226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
00:27:16.608 [2024-07-24 09:12:54.465378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.608 [2024-07-24 09:12:54.465403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21c3070 with addr=10.0.0.2, port=4420
00:27:16.608 [2024-07-24 09:12:54.465418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3070 is same with the state(5) to be set
00:27:16.608 [2024-07-24 09:12:54.466242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:16.608 [2024-07-24 09:12:54.466269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:16.609 [2024-07-24 09:12:54.466285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:16.609 [2024-07-24 09:12:54.466301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:16.609 [2024-07-24 09:12:54.466317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:16.609 [2024-07-24 09:12:54.466382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21d5ad0 (9): Bad file descriptor
00:27:16.609 [2024-07-24 09:12:54.466407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202cce0 (9): Bad file descriptor
00:27:16.609 [2024-07-24 09:12:54.466425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21c3070 (9): Bad file descriptor
00:27:16.609 [2024-07-24 09:12:54.466511 .. 09:12:54.468450] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repeated command/completion pairs condensed]
00:27:16.610 [2024-07-24 09:12:54.468465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29373d0 is same with the state(5) to be set
00:27:16.610 [2024-07-24 09:12:54.469704 .. 09:12:54.469802] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:4-6 nsid:1 lba:16896-17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [3 repeated command/completion pairs condensed]
00:27:16.610 [2024-07-24 09:12:54.469818 .. 09:12:54.469921] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:0-3 nsid:1 lba:24576-24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [4 repeated command/completion pairs condensed]
00:27:16.610 [2024-07-24 09:12:54.469937 .. 09:12:54.470920] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:7-39 nsid:1 lba:17280-21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [33 repeated command/completion pairs condensed]
00:27:16.611 [2024-07-24 09:12:54.470936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.470949] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.470965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.470979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.470995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.611 [2024-07-24 09:12:54.471253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.611 [2024-07-24 09:12:54.471268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.612 [2024-07-24 09:12:54.471538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:16.612 [2024-07-24 09:12:54.471553] nvme_qpair.c: 
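For readers decoding these completions: the "(00/08)" pair printed by spdk_nvme_print_completion is the NVMe Status Code Type and Status Code in hex. SCT 0x0 is the generic command status set, in which SC 0x08 is "Command Aborted due to SQ Deletion" -- exactly what is expected when a submission queue is torn down with commands still queued. A minimal, illustrative decoder (not part of SPDK):

    decode_nvme_status() {                         # usage: decode_nvme_status 00 08
      local sct=$1 sc=$2
      case "$sct/$sc" in
        00/00) echo "SUCCESS" ;;
        00/08) echo "ABORTED - SQ DELETION" ;;     # generic SCT 0x0, SC 0x08
        *)     echo "status not mapped here ($sct/$sc)" ;;
      esac
    }
    decode_nvme_status 00 08                       # -> ABORTED - SQ DELETION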
00:27:16.611 [2024-07-24 09:12:54.471659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2adee10 is same with the state(5) to be set
00:27:16.612 [2024-07-24 09:12:54.473235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:16.612 task offset: 24576 on job bdev=Nvme3n1 fails
00:27:16.612
00:27:16.612 Latency(us)
00:27:16.612 (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in error at about the runtime shown)
00:27:16.612 Device Information : runtime(s)     IOPS   MiB/s   Fail/s   TO/s     Average         min         max
00:27:16.612 Nvme1n1            :       0.91   139.98    8.75    69.99   0.00   301469.14    39030.33   262532.36
00:27:16.612 Nvme2n1            :       0.92   138.72    8.67    69.36   0.00   298096.77    22427.88   274959.93
00:27:16.612 Nvme3n1            :       0.89   214.72   13.42    71.57   0.00   211825.83     5825.42   270299.59
00:27:16.612 Nvme4n1            :       0.93   138.24    8.64    69.12   0.00   286947.81    20680.25   248551.35
00:27:16.612 Nvme5n1            :       0.93   137.77    8.61    68.88   0.00   281909.98    23301.69   267192.70
00:27:16.612 Nvme6n1            :       0.90   212.34   13.27     4.42   0.00   261470.50     1990.35   265639.25
00:27:16.612 Nvme7n1            :       0.93   205.51   12.84    68.50   0.00   203509.00    14175.19   273406.48
00:27:16.612 Nvme8n1            :       0.94   140.81    8.80    68.27   0.00   261183.99    21359.88   250104.79
00:27:16.612 Nvme9n1            :       0.91   140.47    8.78    70.23   0.00   252045.08    14369.37   295154.73
00:27:16.612 Nvme10n1           :       0.90   142.06    8.88    71.03   0.00   242722.39    13786.83   313796.08
00:27:16.612 ===================================================================================================================
00:27:16.612 Total              :            1610.60  100.66   631.39   0.00   256857.34     1990.35   313796.08
00:27:16.612 [2024-07-24 09:12:54.499623] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:16.612 [2024-07-24 09:12:54.499703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:16.612 [2024-07-24 09:12:54.499997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.612 [2024-07-24 09:12:54.500032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2009f10 with addr=10.0.0.2, port=4420
00:27:16.612 [2024-07-24 09:12:54.500068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009f10 is same with the state(5) to be set
00:27:16.612 [... the same connect()/sock-connection-error/recv-state triplet repeats for tqpair 0x2163180, 0x21ae920, 0x1aff610 and 0x2036300, all against addr=10.0.0.2, port=4420 ...]
00:27:16.613 [2024-07-24 09:12:54.500722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:16.613 [2024-07-24 09:12:54.500735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:16.613 [2024-07-24 09:12:54.500752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:16.613 [... the same three-line reinitialization failure repeats for cnode4 and cnode5 ...]
00:27:16.613 [2024-07-24 09:12:54.501295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.613 [2024-07-24 09:12:54.501322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:16.613 [2024-07-24 09:12:54.501336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
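A quick cross-check of the table above (not produced by the harness): with the 65536-byte IOs used here, the MiB/s column should equal IOPS x 65536 / 2^20, i.e. IOPS / 16, and the Total row is the per-device column sum (the MiB/s column sums to exactly 100.66). A small awk sketch over three of the rows:

    awk 'BEGIN {
      # device, reported IOPS, reported MiB/s -- values copied from the rows above
      split("Nvme1n1 139.98 8.75 Nvme3n1 214.72 13.42 Nvme6n1 212.34 13.27", f)
      for (i = 1; i <= 9; i += 3)
        printf "%-8s IOPS/16 = %6.2f, reported %6.2f\n", f[i], f[i+1] / 16, f[i+2]
    }'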
00:27:16.613 [2024-07-24 09:12:54.501471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:16.613 [2024-07-24 09:12:54.501498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21a3cc0 with addr=10.0.0.2, port=4420
00:27:16.613 [... the reconnect storm continues in the same pattern until the app exits: connect() to 10.0.0.2:4420 keeps failing with errno = 111 for tqpair 0x21d3ab0, 0x21c3070, 0x202cce0 and 0x21d5ad0, each tqpair then fails to flush with "(9): Bad file descriptor", bdev_nvme notes "Unable to perform failover, already in progress.", and every remaining controller (nqn.2016-06.io.spdk:cnode1 through cnode10) ends with "Ctrlr is in error state" / "controller reinitialization failed" / "in failed state." followed by "Resetting controller failed." ...]
00:27:16.873 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:27:16.873 09:12:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3853491
00:27:18.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3853491) - No such process
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:18.250 09:12:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:18.250 rmmod nvme_tcp
00:27:18.250 rmmod nvme_fabrics
00:27:18.250 rmmod nvme_keyring
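The teardown trace above shows the guarded-kill idiom the shutdown test relies on: bdevperf (PID 3853491) already died when the target was killed, so kill -9 fails with "No such process", and the trailing true (traced as its own command on the same script line) keeps the script alive under set -e. A minimal sketch of that pattern, with an illustrative PID -- not the actual shutdown.sh source:

    #!/usr/bin/env bash
    set -e                            # a failing command would normally abort the script
    pid=3853491                       # illustrative stale PID; the real one came from the test
    kill -9 "$pid" || true            # "No such process" is expected and must not abort
    rm -f ./local-job0-0-verify.state # cleanup continues regardless of the kill result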
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:18.250 09:12:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:20.225 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:27:20.225
00:27:20.225 real 0m7.610s
00:27:20.225 user 0m18.620s
00:27:20.225 sys 0m1.576s
00:27:20.225 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:27:20.225 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:20.225 ************************************
00:27:20.225 END TEST nvmf_shutdown_tc3
00:27:20.225 ************************************
00:27:20.225 09:12:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:27:20.225
00:27:20.225 real 0m27.101s
00:27:20.225 user 1m15.423s
00:27:20.225 sys 0m6.498s
00:27:20.225 ************************************
00:27:20.225 END TEST nvmf_shutdown
00:27:20.225 ************************************
00:27:20.225 09:12:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:27:20.225
00:27:20.225 real 16m43.851s
00:27:20.225 user 46m56.183s
00:27:20.225 sys 3m52.102s
00:27:20.225 ************************************
00:27:20.225 END TEST nvmf_target_extra
00:27:20.225 ************************************
00:27:20.225 09:12:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:27:20.225 09:12:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:20.225 09:12:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:20.225 09:12:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:27:20.225 ************************************
00:27:20.225 START TEST nvmf_host
00:27:20.225 ************************************
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:27:20.225 * Looking for test storage...
00:27:20.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:20.225 [... nvmf/common.sh@9-@22 set the NVMF_*/NVME_* defaults traced here: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a (from nvme gen-hostnqn), NVME_HOSTID, NVME_HOST, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
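The real/user/sys triplets and the START/END banners above come from nested run_test calls timing each suite (nvmf_shutdown_tc3 inside nvmf_shutdown inside nvmf_target_extra). A rough sketch of what such a wrapper does -- a hypothetical simplification, not the actual helper in autotest_common.sh:

    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                   # emits the real/user/sys lines seen in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }
    run_test_sketch nvmf_host ./nvmf_host.sh --transport=tcp   # illustrative invocation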
00:27:20.225 [... paths/export.sh@2-@6 prepend /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH once more, then export and echo the result; the traced PATH string already carries several earlier copies of the same entries, ending in /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin ...]
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:20.225 ************************************
00:27:20.225 START TEST nvmf_multicontroller
00:27:20.225 ************************************
00:27:20.225 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:20.225 * Looking for test storage...
00:27:20.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:20.226 [... nvmf/common.sh@7-@22 set the same NVMF_*/NVME_* defaults again, scripts/common.sh and paths/export.sh are re-sourced (growing PATH once more), and nvmf/common.sh@47-@51 rebuild the NVMF_APP arguments with have_pci_nics=0 ...]
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
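Each time paths/export.sh is sourced it prepends the same Go/protoc/golangci directories again, which is why the traced PATH strings keep growing with duplicate entries. The duplicates are harmless, but a dedup helper along these lines -- hypothetical, not in the SPDK tree -- would keep PATH bounded:

    dedup_path() {
      local seen=":" out="" dir
      while IFS= read -r -d: dir; do
        [[ -z $dir || $seen == *":$dir:"* ]] && continue   # skip empties and repeats
        seen+="$dir:"
        out+="${out:+:}$dir"
      done <<< "$PATH:"
      printf '%s\n' "$out"
    }
    PATH=$(dedup_path)          # e.g. collapses the repeated /opt/go/1.21.1/bin entries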
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:27:20.226 09:12:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:22.127 [... nvmf/common.sh@291-@298 declare the empty pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722 and mlx arrays, and @301-@318 fill e810/x722/mlx with the supported Intel (0x1592, 0x159b, 0x37d2) and Mellanox (0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) device IDs ...]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)'
00:27:22.127 Found 0000:09:00.0 (0x8086 - 0x159b)
00:27:22.127 [... nvmf/common.sh@342-@352 vet the ice-bound device (not unknown, not unbound, not a 0x1017/0x1019 Mellanox ID, not RDMA), then the @340 loop repeats for the second port ...]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)'
00:27:22.127 Found 0000:09:00.1 (0x8086 - 0x159b)
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:27:22.127 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]]
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0'
00:27:22.128 Found net devices under 0000:09:00.0: cvl_0_0
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:22.128 [... the same @382-@401 walk finds the second interface ...]
00:27:22.128 Found net devices under 0000:09:00.1: cvl_0_1
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller --
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.128 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.386 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:22.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:27:22.386 00:27:22.386 --- 10.0.0.2 ping statistics --- 00:27:22.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.386 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:27:22.387 00:27:22.387 --- 10.0.0.1 ping statistics --- 00:27:22.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.387 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3855923 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3855923 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3855923 ']' 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.387 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.387 [2024-07-24 09:13:00.384592] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
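The nvmf_tcp_init sequence traced above reduces to a short namespace recipe. As a minimal standalone sketch (interface, namespace, and address values copied from this run; on other rigs the harness picks whichever supported NICs it detects):

    # Target side runs inside its own network namespace; the initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move the target-facing port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Both pings answering is what lets nvmftestinit return 0 above; the nvmf_tgt whose EAL banner appears here is launched under ip netns exec cvl_0_0_ns_spdk so it listens on the target side of this topology.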
00:27:22.387 [2024-07-24 09:13:00.384668] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.387 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.387 [2024-07-24 09:13:00.424280] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:22.387 [2024-07-24 09:13:00.454354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:22.645 [2024-07-24 09:13:00.543812] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.645 [2024-07-24 09:13:00.543884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.645 [2024-07-24 09:13:00.543898] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.645 [2024-07-24 09:13:00.543908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.645 [2024-07-24 09:13:00.543917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.645 [2024-07-24 09:13:00.544003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.646 [2024-07-24 09:13:00.544067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.646 [2024-07-24 09:13:00.544070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 [2024-07-24 09:13:00.690678] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 Malloc0 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 [2024-07-24 09:13:00.751319] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.646 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.646 [2024-07-24 09:13:00.759205] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.904 Malloc1 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.904 09:13:00 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3856066 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3856066 /var/tmp/bdevperf.sock 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3856066 ']' 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:22.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
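Every check that follows drives bdev_nvme_attach_controller at the bdevperf app's private RPC socket (rpc_cmd in this trace is the harness's wrapper around SPDK's scripts/rpc.py). A condensed sketch of the matrix being exercised, with arguments lifted from the trace below:

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # First attach succeeds and exposes the remote namespace as bdev NVMe0n1:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # Reusing the name NVMe0 with a different host NQN, a different subsystem
    # NQN, or with multipath disabled is rejected with JSON-RPC error -114:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000    # fails
    # A second listener of the same subsystem is the one re-use that is allowed,
    # becoming an extra path on the existing controller:
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1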
00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:22.904 09:13:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.162 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:23.162 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:23.162 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:23.162 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.162 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.421 NVMe0n1 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.421 1 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.421 request: 00:27:23.421 { 00:27:23.421 "name": "NVMe0", 00:27:23.421 "trtype": "tcp", 00:27:23.421 "traddr": "10.0.0.2", 00:27:23.421 "adrfam": "ipv4", 00:27:23.421 
"trsvcid": "4420", 00:27:23.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.421 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:23.421 "hostaddr": "10.0.0.2", 00:27:23.421 "hostsvcid": "60000", 00:27:23.421 "prchk_reftag": false, 00:27:23.421 "prchk_guard": false, 00:27:23.421 "hdgst": false, 00:27:23.421 "ddgst": false, 00:27:23.421 "method": "bdev_nvme_attach_controller", 00:27:23.421 "req_id": 1 00:27:23.421 } 00:27:23.421 Got JSON-RPC error response 00:27:23.421 response: 00:27:23.421 { 00:27:23.421 "code": -114, 00:27:23.421 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:23.421 } 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.421 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.421 request: 00:27:23.421 { 00:27:23.421 "name": "NVMe0", 00:27:23.421 "trtype": "tcp", 00:27:23.421 "traddr": "10.0.0.2", 00:27:23.421 "adrfam": "ipv4", 00:27:23.421 "trsvcid": "4420", 00:27:23.421 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:23.421 "hostaddr": "10.0.0.2", 00:27:23.421 "hostsvcid": "60000", 00:27:23.421 "prchk_reftag": false, 00:27:23.421 "prchk_guard": false, 00:27:23.421 "hdgst": false, 00:27:23.421 "ddgst": false, 00:27:23.421 "method": "bdev_nvme_attach_controller", 00:27:23.421 "req_id": 1 00:27:23.421 } 00:27:23.421 Got JSON-RPC error response 00:27:23.422 response: 00:27:23.422 { 00:27:23.422 "code": -114, 00:27:23.422 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:27:23.422 } 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.422 request: 00:27:23.422 { 00:27:23.422 "name": "NVMe0", 00:27:23.422 "trtype": "tcp", 00:27:23.422 "traddr": "10.0.0.2", 00:27:23.422 "adrfam": "ipv4", 00:27:23.422 "trsvcid": "4420", 00:27:23.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.422 "hostaddr": "10.0.0.2", 00:27:23.422 "hostsvcid": "60000", 00:27:23.422 "prchk_reftag": false, 00:27:23.422 "prchk_guard": false, 00:27:23.422 "hdgst": false, 00:27:23.422 "ddgst": false, 00:27:23.422 "multipath": "disable", 00:27:23.422 "method": "bdev_nvme_attach_controller", 00:27:23.422 "req_id": 1 00:27:23.422 } 00:27:23.422 Got JSON-RPC error response 00:27:23.422 response: 00:27:23.422 { 00:27:23.422 "code": -114, 00:27:23.422 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:23.422 } 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.422 request: 00:27:23.422 { 00:27:23.422 "name": "NVMe0", 00:27:23.422 "trtype": "tcp", 00:27:23.422 "traddr": "10.0.0.2", 00:27:23.422 "adrfam": "ipv4", 00:27:23.422 "trsvcid": "4420", 00:27:23.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:23.422 "hostaddr": "10.0.0.2", 00:27:23.422 "hostsvcid": "60000", 00:27:23.422 "prchk_reftag": false, 00:27:23.422 "prchk_guard": false, 00:27:23.422 "hdgst": false, 00:27:23.422 "ddgst": false, 00:27:23.422 "multipath": "failover", 00:27:23.422 "method": "bdev_nvme_attach_controller", 00:27:23.422 "req_id": 1 00:27:23.422 } 00:27:23.422 Got JSON-RPC error response 00:27:23.422 response: 00:27:23.422 { 00:27:23.422 "code": -114, 00:27:23.422 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:23.422 } 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.422 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.680 00:27:23.680 09:13:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.680 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:23.680 09:13:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:25.052 0 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3856066 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3856066 ']' 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3856066 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3856066 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
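The I/O phase that just completed is bdevperf's RPC-driven mode: the app starts idle under -z, controllers are attached over its socket, and a helper script kicks off the run. Condensed, with the flags taken from the launch line earlier in this trace:

    # -z: start suspended and wait for an RPC trigger instead of running immediately;
    # -q 128 -o 4096 -w write -t 1: queue depth 128, 4 KiB writes, 1 second run.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    # ...attach NVMe0/NVMe1 as sketched above, then:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The bare 0 printed after perform_tests above is the run's success status; the per-job latency table lands in try.txt, which pap dumps and deletes during the teardown that follows.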
00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3856066' 00:27:25.052 killing process with pid 3856066 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3856066 00:27:25.052 09:13:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3856066 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:25.052 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:27:25.323 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:25.323 [2024-07-24 09:13:00.864399] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:25.323 [2024-07-24 09:13:00.864489] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3856066 ] 00:27:25.323 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.323 [2024-07-24 09:13:00.896787] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:27:25.323 [2024-07-24 09:13:00.925557] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:25.323 [2024-07-24 09:13:01.012454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:25.323 [2024-07-24 09:13:01.734426] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name a5fc674e-d8ff-4da1-9da5-ca1ac44d0291 already exists
00:27:25.323 [2024-07-24 09:13:01.734465] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:a5fc674e-d8ff-4da1-9da5-ca1ac44d0291 alias for bdev NVMe1n1
00:27:25.323 [2024-07-24 09:13:01.734480] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:27:25.323 Running I/O for 1 seconds...
00:27:25.323
00:27:25.323 Latency(us)
00:27:25.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:25.323 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:27:25.323 NVMe0n1 : 1.01 18042.09 70.48 0.00 0.00 7083.12 2002.49 12621.75
00:27:25.323 ===================================================================================================================
00:27:25.323 Total : 18042.09 70.48 0.00 0.00 7083.12 2002.49 12621.75
00:27:25.323 Received shutdown signal, test time was about 1.000000 seconds
00:27:25.323
00:27:25.323 Latency(us)
00:27:25.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:25.323 ===================================================================================================================
00:27:25.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
--- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:25.323 rmmod nvme_tcp
00:27:25.323 rmmod nvme_fabrics
00:27:25.323 rmmod nvme_keyring
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3855923 ']'
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3855923
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3855923 ']'
00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3855923
00:27:25.323 09:13:03
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3855923 00:27:25.323 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:25.324 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:25.324 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3855923' 00:27:25.324 killing process with pid 3855923 00:27:25.324 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3855923 00:27:25.324 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3855923 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.587 09:13:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.488 09:13:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.488 00:27:27.488 real 0m7.324s 00:27:27.488 user 0m11.742s 00:27:27.488 sys 0m2.284s 00:27:27.488 09:13:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.488 09:13:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:27.488 ************************************ 00:27:27.488 END TEST nvmf_multicontroller 00:27:27.488 ************************************ 00:27:27.488 09:13:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:27.488 09:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:27.745 09:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.746 ************************************ 00:27:27.746 START TEST nvmf_aer 00:27:27.746 ************************************ 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:27.746 * Looking for test storage... 
00:27:27.746 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.746 09:13:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:29.645 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:29.645 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:29.645 Found net devices under 0000:09:00.0: cvl_0_0 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.645 09:13:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:29.645 Found net devices under 0000:09:00.1: cvl_0_1 00:27:29.645 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:29.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:27:29.646 00:27:29.646 --- 10.0.0.2 ping statistics --- 00:27:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.646 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:27:29.646 00:27:29.646 --- 10.0.0.1 ping statistics --- 00:27:29.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.646 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3858280 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3858280 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3858280 ']' 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.646 09:13:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:29.911 [2024-07-24 09:13:07.774334] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
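(For reference: a condensed sketch of the two-namespace topology that nvmftestinit assembled in the trace above. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values from this run; the real helper in nvmf/common.sh also covers RDMA and multi-target layouts, which are omitted here. The apparent intent is that the target-side port lives in its own network namespace, so initiator and target reach each other over the wire rather than via local loopback.)

  # move the target-side port into a private network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # address the initiator port (default ns) and the target port (test ns)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  # bring links up on both sides, plus loopback inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # allow NVMe/TCP (TCP/4420) through the host firewall, then verify
  # reachability in both directions, matching the pings recorded above
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1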
00:27:29.911 [2024-07-24 09:13:07.774436] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.911 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.912 [2024-07-24 09:13:07.811754] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:29.912 [2024-07-24 09:13:07.839570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:29.912 [2024-07-24 09:13:07.925837] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.912 [2024-07-24 09:13:07.925890] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.912 [2024-07-24 09:13:07.925904] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.912 [2024-07-24 09:13:07.925915] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.912 [2024-07-24 09:13:07.925925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:29.912 [2024-07-24 09:13:07.925983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.912 [2024-07-24 09:13:07.926038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.912 [2024-07-24 09:13:07.926127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.912 [2024-07-24 09:13:07.926130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 [2024-07-24 09:13:08.062250] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 Malloc0 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:30.169 09:13:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.169 [2024-07-24 09:13:08.112875] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.169 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.170 [ 00:27:30.170 { 00:27:30.170 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.170 "subtype": "Discovery", 00:27:30.170 "listen_addresses": [], 00:27:30.170 "allow_any_host": true, 00:27:30.170 "hosts": [] 00:27:30.170 }, 00:27:30.170 { 00:27:30.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.170 "subtype": "NVMe", 00:27:30.170 "listen_addresses": [ 00:27:30.170 { 00:27:30.170 "trtype": "TCP", 00:27:30.170 "adrfam": "IPv4", 00:27:30.170 "traddr": "10.0.0.2", 00:27:30.170 "trsvcid": "4420" 00:27:30.170 } 00:27:30.170 ], 00:27:30.170 "allow_any_host": true, 00:27:30.170 "hosts": [], 00:27:30.170 "serial_number": "SPDK00000000000001", 00:27:30.170 "model_number": "SPDK bdev Controller", 00:27:30.170 "max_namespaces": 2, 00:27:30.170 "min_cntlid": 1, 00:27:30.170 "max_cntlid": 65519, 00:27:30.170 "namespaces": [ 00:27:30.170 { 00:27:30.170 "nsid": 1, 00:27:30.170 "bdev_name": "Malloc0", 00:27:30.170 "name": "Malloc0", 00:27:30.170 "nguid": "9D372BC5863E421781D5775A9E7F13DB", 00:27:30.170 "uuid": "9d372bc5-863e-4217-81d5-775a9e7f13db" 00:27:30.170 } 00:27:30.170 ] 00:27:30.170 } 00:27:30.170 ] 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3858306 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@1263 -- # local i=0 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:27:30.170 EAL: No free 2048 kB hugepages reported on node 1 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:27:30.170 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 Malloc1 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 [ 00:27:30.429 { 00:27:30.429 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:30.429 "subtype": "Discovery", 00:27:30.429 "listen_addresses": [], 00:27:30.429 "allow_any_host": true, 00:27:30.429 "hosts": [] 00:27:30.429 }, 00:27:30.429 { 00:27:30.429 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:30.429 "subtype": "NVMe", 00:27:30.429 "listen_addresses": [ 00:27:30.429 { 00:27:30.429 "trtype": "TCP", 00:27:30.429 "adrfam": "IPv4", 00:27:30.429 "traddr": "10.0.0.2", 00:27:30.429 "trsvcid": "4420" 00:27:30.429 } 00:27:30.429 ], 00:27:30.429 "allow_any_host": true, 00:27:30.429 "hosts": [], 00:27:30.429 "serial_number": "SPDK00000000000001", 00:27:30.429 "model_number": "SPDK bdev Controller", 00:27:30.429 "max_namespaces": 2, 00:27:30.429 "min_cntlid": 1, 00:27:30.429 "max_cntlid": 65519, 00:27:30.429 "namespaces": [ 00:27:30.429 { 00:27:30.429 "nsid": 1, 00:27:30.429 "bdev_name": "Malloc0", 00:27:30.429 "name": "Malloc0", 00:27:30.429 "nguid": "9D372BC5863E421781D5775A9E7F13DB", 00:27:30.429 "uuid": "9d372bc5-863e-4217-81d5-775a9e7f13db" 
00:27:30.429 }, 00:27:30.429 { 00:27:30.429 "nsid": 2, 00:27:30.429 "bdev_name": "Malloc1", 00:27:30.429 "name": "Malloc1", 00:27:30.429 "nguid": "EAA79F1DCF0842A99434E4A0FE960CA2", 00:27:30.429 "uuid": "eaa79f1d-cf08-42a9-9434-e4a0fe960ca2" 00:27:30.429 } 00:27:30.429 ] 00:27:30.429 } 00:27:30.429 ] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3858306 00:27:30.429 Asynchronous Event Request test 00:27:30.429 Attaching to 10.0.0.2 00:27:30.429 Attached to 10.0.0.2 00:27:30.429 Registering asynchronous event callbacks... 00:27:30.429 Starting namespace attribute notice tests for all controllers... 00:27:30.429 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:30.429 aer_cb - Changed Namespace 00:27:30.429 Cleaning up... 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:30.429 rmmod nvme_tcp 00:27:30.429 rmmod nvme_fabrics 00:27:30.429 rmmod nvme_keyring 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3858280 ']' 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3858280 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3858280 ']' 00:27:30.429 
09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3858280 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.429 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3858280 00:27:30.687 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:30.687 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:30.687 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3858280' 00:27:30.687 killing process with pid 3858280 00:27:30.687 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3858280 00:27:30.687 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3858280 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.946 09:13:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:32.846 00:27:32.846 real 0m5.231s 00:27:32.846 user 0m4.112s 00:27:32.846 sys 0m1.785s 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:32.846 ************************************ 00:27:32.846 END TEST nvmf_aer 00:27:32.846 ************************************ 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.846 ************************************ 00:27:32.846 START TEST nvmf_async_init 00:27:32.846 ************************************ 00:27:32.846 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:32.846 * Looking for test storage... 
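(The nvmf_aer run that just finished drove the target entirely over JSON-RPC; rpc_cmd in these traces is the test-harness wrapper around SPDK's scripts/rpc.py. A sketch of the same lifecycle as plain rpc.py calls, with every argument copied from the trace above — the hot-add of namespace 2 is what triggers the Changed Namespace AEN, log page 0x04, that the aer tool waits for:)

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 --name Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # with the aer tool connected, hot-add a second namespace to fire the event
  rpc.py bdev_malloc_create 64 4096 --name Malloc1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

  # teardown, mirroring the cleanup logged above
  rpc.py bdev_malloc_delete Malloc0
  rpc.py bdev_malloc_delete Malloc1
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1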
00:27:33.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.104 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:33.105 09:13:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=bd5a079cadb94b20a492c9133f6d9fbd 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:33.105 09:13:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:35.006 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:35.006 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
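(The discovery pass that continues below resolves each matched PCI function to its kernel net devices through sysfs. A condensed sketch of that loop, with pci_devs seeded with the two E810 functions found in this run; the real gather_supported_nvmf_pci_devs also builds the Intel/Mellanox device-ID tables and checks link state, which this sketch skips:)

  pci_devs=("0000:09:00.0" "0000:09:00.1")
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # each entry under .../net is an interface backed by this PCI function
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done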
00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:35.006 Found net devices under 0000:09:00.0: cvl_0_0 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:35.006 Found net devices under 0000:09:00.1: cvl_0_1 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.006 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.007 09:13:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:27:35.007 00:27:35.007 --- 10.0.0.2 ping statistics --- 00:27:35.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.007 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:35.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:27:35.007 00:27:35.007 --- 10.0.0.1 ping statistics --- 00:27:35.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.007 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.007 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3860356 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3860356 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3860356 ']' 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.265 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.265 [2024-07-24 09:13:13.185770] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:35.265 [2024-07-24 09:13:13.185844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.265 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.265 [2024-07-24 09:13:13.222335] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:35.265 [2024-07-24 09:13:13.249049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.265 [2024-07-24 09:13:13.333112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:35.265 [2024-07-24 09:13:13.333165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:35.265 [2024-07-24 09:13:13.333178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:35.265 [2024-07-24 09:13:13.333190] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:35.265 [2024-07-24 09:13:13.333199] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:35.265 [2024-07-24 09:13:13.333233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 [2024-07-24 09:13:13.465534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 null0 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g bd5a079cadb94b20a492c9133f6d9fbd 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.523 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.524 [2024-07-24 09:13:13.505782] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.524 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.524 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:35.524 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.524 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.781 nvme0n1 00:27:35.781 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.781 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:35.781 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.781 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.781 [ 00:27:35.781 { 00:27:35.781 "name": "nvme0n1", 00:27:35.781 "aliases": [ 00:27:35.781 "bd5a079c-adb9-4b20-a492-c9133f6d9fbd" 00:27:35.781 ], 00:27:35.781 "product_name": "NVMe disk", 00:27:35.781 "block_size": 512, 00:27:35.781 "num_blocks": 2097152, 00:27:35.781 "uuid": "bd5a079c-adb9-4b20-a492-c9133f6d9fbd", 00:27:35.781 "assigned_rate_limits": { 00:27:35.781 "rw_ios_per_sec": 0, 00:27:35.781 "rw_mbytes_per_sec": 0, 00:27:35.781 "r_mbytes_per_sec": 0, 00:27:35.781 "w_mbytes_per_sec": 0 00:27:35.781 }, 00:27:35.781 "claimed": false, 00:27:35.781 "zoned": false, 00:27:35.781 "supported_io_types": { 00:27:35.781 "read": true, 00:27:35.781 "write": true, 00:27:35.781 "unmap": false, 00:27:35.781 "flush": true, 00:27:35.781 "reset": true, 00:27:35.781 "nvme_admin": true, 00:27:35.781 "nvme_io": true, 00:27:35.781 "nvme_io_md": false, 00:27:35.781 "write_zeroes": true, 00:27:35.781 "zcopy": false, 00:27:35.781 "get_zone_info": false, 00:27:35.781 "zone_management": false, 00:27:35.781 "zone_append": false, 00:27:35.781 "compare": true, 00:27:35.781 "compare_and_write": true, 00:27:35.781 "abort": true, 00:27:35.781 "seek_hole": false, 00:27:35.781 "seek_data": false, 00:27:35.781 "copy": true, 00:27:35.781 "nvme_iov_md": false 00:27:35.781 }, 00:27:35.781 "memory_domains": [ 00:27:35.781 { 00:27:35.781 "dma_device_id": "system", 00:27:35.781 "dma_device_type": 1 00:27:35.781 } 00:27:35.781 ], 00:27:35.781 "driver_specific": { 00:27:35.781 "nvme": [ 00:27:35.781 { 00:27:35.781 "trid": { 00:27:35.781 
"trtype": "TCP", 00:27:35.781 "adrfam": "IPv4", 00:27:35.781 "traddr": "10.0.0.2", 00:27:35.781 "trsvcid": "4420", 00:27:35.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:35.782 }, 00:27:35.782 "ctrlr_data": { 00:27:35.782 "cntlid": 1, 00:27:35.782 "vendor_id": "0x8086", 00:27:35.782 "model_number": "SPDK bdev Controller", 00:27:35.782 "serial_number": "00000000000000000000", 00:27:35.782 "firmware_revision": "24.09", 00:27:35.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:35.782 "oacs": { 00:27:35.782 "security": 0, 00:27:35.782 "format": 0, 00:27:35.782 "firmware": 0, 00:27:35.782 "ns_manage": 0 00:27:35.782 }, 00:27:35.782 "multi_ctrlr": true, 00:27:35.782 "ana_reporting": false 00:27:35.782 }, 00:27:35.782 "vs": { 00:27:35.782 "nvme_version": "1.3" 00:27:35.782 }, 00:27:35.782 "ns_data": { 00:27:35.782 "id": 1, 00:27:35.782 "can_share": true 00:27:35.782 } 00:27:35.782 } 00:27:35.782 ], 00:27:35.782 "mp_policy": "active_passive" 00:27:35.782 } 00:27:35.782 } 00:27:35.782 ] 00:27:35.782 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.782 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:35.782 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.782 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:35.782 [2024-07-24 09:13:13.758356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:35.782 [2024-07-24 09:13:13.758448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb21850 (9): Bad file descriptor 00:27:36.040 [2024-07-24 09:13:13.901256] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:36.040 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.040 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:36.040 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.040 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.040 [ 00:27:36.040 { 00:27:36.040 "name": "nvme0n1", 00:27:36.040 "aliases": [ 00:27:36.040 "bd5a079c-adb9-4b20-a492-c9133f6d9fbd" 00:27:36.040 ], 00:27:36.040 "product_name": "NVMe disk", 00:27:36.040 "block_size": 512, 00:27:36.040 "num_blocks": 2097152, 00:27:36.040 "uuid": "bd5a079c-adb9-4b20-a492-c9133f6d9fbd", 00:27:36.040 "assigned_rate_limits": { 00:27:36.040 "rw_ios_per_sec": 0, 00:27:36.040 "rw_mbytes_per_sec": 0, 00:27:36.040 "r_mbytes_per_sec": 0, 00:27:36.040 "w_mbytes_per_sec": 0 00:27:36.040 }, 00:27:36.040 "claimed": false, 00:27:36.040 "zoned": false, 00:27:36.040 "supported_io_types": { 00:27:36.040 "read": true, 00:27:36.040 "write": true, 00:27:36.040 "unmap": false, 00:27:36.040 "flush": true, 00:27:36.040 "reset": true, 00:27:36.040 "nvme_admin": true, 00:27:36.040 "nvme_io": true, 00:27:36.040 "nvme_io_md": false, 00:27:36.040 "write_zeroes": true, 00:27:36.040 "zcopy": false, 00:27:36.040 "get_zone_info": false, 00:27:36.041 "zone_management": false, 00:27:36.041 "zone_append": false, 00:27:36.041 "compare": true, 00:27:36.041 "compare_and_write": true, 00:27:36.041 "abort": true, 00:27:36.041 "seek_hole": false, 00:27:36.041 "seek_data": false, 00:27:36.041 "copy": true, 00:27:36.041 "nvme_iov_md": false 00:27:36.041 }, 00:27:36.041 "memory_domains": [ 00:27:36.041 { 00:27:36.041 "dma_device_id": "system", 00:27:36.041 "dma_device_type": 1 00:27:36.041 } 00:27:36.041 ], 00:27:36.041 "driver_specific": { 00:27:36.041 "nvme": [ 00:27:36.041 { 00:27:36.041 "trid": { 00:27:36.041 "trtype": "TCP", 00:27:36.041 "adrfam": "IPv4", 00:27:36.041 "traddr": "10.0.0.2", 00:27:36.041 "trsvcid": "4420", 00:27:36.041 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:36.041 }, 00:27:36.041 "ctrlr_data": { 00:27:36.041 "cntlid": 2, 00:27:36.041 "vendor_id": "0x8086", 00:27:36.041 "model_number": "SPDK bdev Controller", 00:27:36.041 "serial_number": "00000000000000000000", 00:27:36.041 "firmware_revision": "24.09", 00:27:36.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.041 "oacs": { 00:27:36.041 "security": 0, 00:27:36.041 "format": 0, 00:27:36.041 "firmware": 0, 00:27:36.041 "ns_manage": 0 00:27:36.041 }, 00:27:36.041 "multi_ctrlr": true, 00:27:36.041 "ana_reporting": false 00:27:36.041 }, 00:27:36.041 "vs": { 00:27:36.041 "nvme_version": "1.3" 00:27:36.041 }, 00:27:36.041 "ns_data": { 00:27:36.041 "id": 1, 00:27:36.041 "can_share": true 00:27:36.041 } 00:27:36.041 } 00:27:36.041 ], 00:27:36.041 "mp_policy": "active_passive" 00:27:36.041 } 00:27:36.041 } 00:27:36.041 ] 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jHuL11e35s 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jHuL11e35s 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 [2024-07-24 09:13:13.951007] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:36.041 [2024-07-24 09:13:13.951144] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jHuL11e35s 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 [2024-07-24 09:13:13.959026] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jHuL11e35s 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 [2024-07-24 09:13:13.967056] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:36.041 [2024-07-24 09:13:13.967124] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:36.041 nvme0n1 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
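The TLS leg exercises the (explicitly experimental) secure-channel support: a PSK in NVMe TLS interchange format is written to a mode-0600 temp file, the subsystem is closed to arbitrary hosts, a second listener is opened on port 4421 with --secure-channel, and host1 is admitted with that PSK on both the target and initiator sides. In sketch form, with the same rpc.py shorthand and a tempfile path that naturally varies per run:

  KEY_PATH=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY_PATH"
  chmod 0600 "$KEY_PATH"
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
  rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"

The two deprecation warnings in the log (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk) note that this path-based PSK plumbing was scheduled for removal in v24.09; the bdev dump that follows confirms the attach landed on trsvcid 4421 with cntlid 3.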
00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 [ 00:27:36.041 { 00:27:36.041 "name": "nvme0n1", 00:27:36.041 "aliases": [ 00:27:36.041 "bd5a079c-adb9-4b20-a492-c9133f6d9fbd" 00:27:36.041 ], 00:27:36.041 "product_name": "NVMe disk", 00:27:36.041 "block_size": 512, 00:27:36.041 "num_blocks": 2097152, 00:27:36.041 "uuid": "bd5a079c-adb9-4b20-a492-c9133f6d9fbd", 00:27:36.041 "assigned_rate_limits": { 00:27:36.041 "rw_ios_per_sec": 0, 00:27:36.041 "rw_mbytes_per_sec": 0, 00:27:36.041 "r_mbytes_per_sec": 0, 00:27:36.041 "w_mbytes_per_sec": 0 00:27:36.041 }, 00:27:36.041 "claimed": false, 00:27:36.041 "zoned": false, 00:27:36.041 "supported_io_types": { 00:27:36.041 "read": true, 00:27:36.041 "write": true, 00:27:36.041 "unmap": false, 00:27:36.041 "flush": true, 00:27:36.041 "reset": true, 00:27:36.041 "nvme_admin": true, 00:27:36.041 "nvme_io": true, 00:27:36.041 "nvme_io_md": false, 00:27:36.041 "write_zeroes": true, 00:27:36.041 "zcopy": false, 00:27:36.041 "get_zone_info": false, 00:27:36.041 "zone_management": false, 00:27:36.041 "zone_append": false, 00:27:36.041 "compare": true, 00:27:36.041 "compare_and_write": true, 00:27:36.041 "abort": true, 00:27:36.041 "seek_hole": false, 00:27:36.041 "seek_data": false, 00:27:36.041 "copy": true, 00:27:36.041 "nvme_iov_md": false 00:27:36.041 }, 00:27:36.041 "memory_domains": [ 00:27:36.041 { 00:27:36.041 "dma_device_id": "system", 00:27:36.041 "dma_device_type": 1 00:27:36.041 } 00:27:36.041 ], 00:27:36.041 "driver_specific": { 00:27:36.041 "nvme": [ 00:27:36.041 { 00:27:36.041 "trid": { 00:27:36.041 "trtype": "TCP", 00:27:36.041 "adrfam": "IPv4", 00:27:36.041 "traddr": "10.0.0.2", 00:27:36.041 "trsvcid": "4421", 00:27:36.041 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:36.041 }, 00:27:36.041 "ctrlr_data": { 00:27:36.041 "cntlid": 3, 00:27:36.041 "vendor_id": "0x8086", 00:27:36.041 "model_number": "SPDK bdev Controller", 00:27:36.041 "serial_number": "00000000000000000000", 00:27:36.041 "firmware_revision": "24.09", 00:27:36.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.041 "oacs": { 00:27:36.041 "security": 0, 00:27:36.041 "format": 0, 00:27:36.041 "firmware": 0, 00:27:36.041 "ns_manage": 0 00:27:36.041 }, 00:27:36.041 "multi_ctrlr": true, 00:27:36.041 "ana_reporting": false 00:27:36.041 }, 00:27:36.041 "vs": { 00:27:36.041 "nvme_version": "1.3" 00:27:36.041 }, 00:27:36.041 "ns_data": { 00:27:36.041 "id": 1, 00:27:36.041 "can_share": true 00:27:36.041 } 00:27:36.041 } 00:27:36.041 ], 00:27:36.041 "mp_policy": "active_passive" 00:27:36.041 } 00:27:36.041 } 00:27:36.041 ] 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.jHuL11e35s 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:36.041 09:13:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.041 rmmod nvme_tcp 00:27:36.041 rmmod nvme_fabrics 00:27:36.041 rmmod nvme_keyring 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3860356 ']' 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3860356 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3860356 ']' 00:27:36.041 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3860356 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3860356 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3860356' 00:27:36.042 killing process with pid 3860356 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3860356 00:27:36.042 [2024-07-24 09:13:14.152253] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:27:36.042 [2024-07-24 09:13:14.152302] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:36.042 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3860356 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.300 09:13:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.300 09:13:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:38.832 00:27:38.832 real 0m5.500s 00:27:38.832 user 0m2.040s 00:27:38.832 sys 0m1.835s 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:38.832 ************************************ 00:27:38.832 END TEST nvmf_async_init 00:27:38.832 ************************************ 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.832 ************************************ 00:27:38.832 START TEST dma 00:27:38.832 ************************************ 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:27:38.832 * Looking for test storage... 00:27:38.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.832 
09:13:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.832 09:13:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.833 09:13:16 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:27:38.833 00:27:38.833 real 0m0.070s 00:27:38.833 user 0m0.035s 00:27:38.833 sys 0m0.041s 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:27:38.833 ************************************ 00:27:38.833 END TEST dma 00:27:38.833 ************************************ 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.833 ************************************ 00:27:38.833 START TEST nvmf_identify 00:27:38.833 ************************************ 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:27:38.833 * Looking for test storage... 00:27:38.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:27:38.833 09:13:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:40.734 09:13:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:40.734 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.734 09:13:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:40.734 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.734 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:40.735 Found net devices under 0000:09:00.0: cvl_0_0 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:40.735 Found net devices under 0000:09:00.1: cvl_0_1 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:40.735 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:40.735 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:40.735 00:27:40.735 --- 10.0.0.2 ping statistics --- 00:27:40.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.735 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:40.735 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:40.735 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:27:40.735 00:27:40.735 --- 10.0.0.1 ping statistics --- 00:27:40.735 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.735 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3862419 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3862419 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3862419 ']' 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:40.735 09:13:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:40.735 [2024-07-24 09:13:18.820414] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:40.735 [2024-07-24 09:13:18.820511] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.993 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.993 [2024-07-24 09:13:18.865823] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:27:40.993 [2024-07-24 09:13:18.896084] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:40.993 [2024-07-24 09:13:18.987731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.993 [2024-07-24 09:13:18.987802] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.994 [2024-07-24 09:13:18.987822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.994 [2024-07-24 09:13:18.987836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.994 [2024-07-24 09:13:18.987848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.994 [2024-07-24 09:13:18.987923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.994 [2024-07-24 09:13:18.987994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.994 [2024-07-24 09:13:18.988086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.994 [2024-07-24 09:13:18.988089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:40.994 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:40.994 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:27:40.994 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.994 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.994 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.280 [2024-07-24 09:13:19.110177] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.280 Malloc0 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.280 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.281 [2024-07-24 09:13:19.181463] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.281 [ 00:27:41.281 { 00:27:41.281 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:41.281 "subtype": "Discovery", 00:27:41.281 "listen_addresses": [ 00:27:41.281 { 00:27:41.281 "trtype": "TCP", 00:27:41.281 "adrfam": "IPv4", 00:27:41.281 "traddr": "10.0.0.2", 00:27:41.281 "trsvcid": "4420" 00:27:41.281 } 00:27:41.281 ], 00:27:41.281 "allow_any_host": true, 00:27:41.281 "hosts": [] 00:27:41.281 }, 00:27:41.281 { 00:27:41.281 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:41.281 "subtype": "NVMe", 00:27:41.281 "listen_addresses": [ 00:27:41.281 { 00:27:41.281 "trtype": "TCP", 00:27:41.281 "adrfam": "IPv4", 00:27:41.281 "traddr": "10.0.0.2", 00:27:41.281 "trsvcid": "4420" 00:27:41.281 } 00:27:41.281 ], 00:27:41.281 "allow_any_host": true, 00:27:41.281 "hosts": [], 00:27:41.281 "serial_number": "SPDK00000000000001", 00:27:41.281 "model_number": "SPDK bdev Controller", 00:27:41.281 "max_namespaces": 32, 00:27:41.281 "min_cntlid": 1, 00:27:41.281 "max_cntlid": 65519, 00:27:41.281 "namespaces": [ 00:27:41.281 { 00:27:41.281 "nsid": 1, 00:27:41.281 "bdev_name": "Malloc0", 00:27:41.281 "name": "Malloc0", 00:27:41.281 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:27:41.281 "eui64": "ABCDEF0123456789", 00:27:41.281 "uuid": "71d7235a-312e-4268-bcd7-656bc7b4214e" 00:27:41.281 } 00:27:41.281 ] 00:27:41.281 } 00:27:41.281 ] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.281 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:27:41.281 [2024-07-24 09:13:19.220179] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
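The identify fixture differs from async_init in two respects: the target runs inside the cvl_0_0_ns_spdk network namespace on one port of the E810 pair (10.0.0.2, with 10.0.0.1 left on the initiator-side interface, as the ping exchange above verifies), and the namespace is a 64 MiB malloc bdev created with explicit NGUID and EUI64 values that the identify output can be checked against. A sketch of the bring-up and probe, again abbreviating scripts/rpc.py as rpc.py (the nvmf_tgt process itself is the one started under ip netns exec above):

  rpc.py bdev_malloc_create 64 512 -b Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # Probe the discovery subsystem with full transport-level logging (-L all):
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The nvmf_get_subsystems dump above already shows both the discovery subsystem and cnode1 carrying the expected serial number, nguid, and eui64 before the identify binary is launched.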
00:27:41.281 [2024-07-24 09:13:19.220222] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862503 ] 00:27:41.281 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.281 [2024-07-24 09:13:19.235755] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:41.281 [2024-07-24 09:13:19.253438] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:27:41.281 [2024-07-24 09:13:19.253495] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.281 [2024-07-24 09:13:19.253505] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.281 [2024-07-24 09:13:19.253519] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.281 [2024-07-24 09:13:19.253532] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.281 [2024-07-24 09:13:19.257148] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:27:41.281 [2024-07-24 09:13:19.257212] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22a6630 0 00:27:41.281 [2024-07-24 09:13:19.272125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:41.281 [2024-07-24 09:13:19.272152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.281 [2024-07-24 09:13:19.272163] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:41.281 [2024-07-24 09:13:19.272169] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.281 [2024-07-24 09:13:19.272223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.272236] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.272244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.281 [2024-07-24 09:13:19.272265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.281 [2024-07-24 09:13:19.272293] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.281 [2024-07-24 09:13:19.278114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.281 [2024-07-24 09:13:19.278132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.281 [2024-07-24 09:13:19.278139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.281 [2024-07-24 09:13:19.278174] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.281 [2024-07-24 09:13:19.278186] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:27:41.281 [2024-07-24 09:13:19.278196] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs 
(no timeout) 00:27:41.281 [2024-07-24 09:13:19.278218] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.281 [2024-07-24 09:13:19.278245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.281 [2024-07-24 09:13:19.278268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.281 [2024-07-24 09:13:19.278405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.281 [2024-07-24 09:13:19.278418] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.281 [2024-07-24 09:13:19.278425] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.281 [2024-07-24 09:13:19.278447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:27:41.281 [2024-07-24 09:13:19.278461] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:27:41.281 [2024-07-24 09:13:19.278473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278481] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.281 [2024-07-24 09:13:19.278498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.281 [2024-07-24 09:13:19.278520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.281 [2024-07-24 09:13:19.278632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.281 [2024-07-24 09:13:19.278648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.281 [2024-07-24 09:13:19.278655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.281 [2024-07-24 09:13:19.278671] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:27:41.281 [2024-07-24 09:13:19.278686] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.281 [2024-07-24 09:13:19.278698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278706] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278713] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.281 [2024-07-24 09:13:19.278723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.281 [2024-07-24 09:13:19.278745] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.281 [2024-07-24 09:13:19.278843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.281 [2024-07-24 09:13:19.278856] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.281 [2024-07-24 09:13:19.278863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278870] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.281 [2024-07-24 09:13:19.278883] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.281 [2024-07-24 09:13:19.278901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278910] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.281 [2024-07-24 09:13:19.278917] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.281 [2024-07-24 09:13:19.278928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.281 [2024-07-24 09:13:19.278948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.281 [2024-07-24 09:13:19.279054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.281 [2024-07-24 09:13:19.279069] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.279076] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279083] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.279098] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:27:41.282 [2024-07-24 09:13:19.279116] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:27:41.282 [2024-07-24 09:13:19.279130] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.282 [2024-07-24 09:13:19.279240] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:27:41.282 [2024-07-24 09:13:19.279249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:41.282 [2024-07-24 09:13:19.279263] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279292] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.279303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.282 [2024-07-24 09:13:19.279325] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.282 [2024-07-24 09:13:19.279462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
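The DEBUG trace through here is the controller-enable state machine: read VS, read CAP, check CC.EN, wait for CSTS.RDY = 0 while disabled, write CC.EN = 1, then wait for CSTS.RDY = 1; over fabrics, each step is a FABRIC PROPERTY GET/SET capsule on the admin queue rather than an MMIO register access. As a sketch, the same properties can be inspected from an initiator with nvme-cli; the register offsets are from the NVMe specification, but the get-property option spelling is an assumption to check against your nvme-cli version:

    # Hypothetical fabrics controller device node; offsets per the NVMe spec.
    DEV=/dev/nvme0
    nvme get-property "$DEV" --offset=0x00 --human-readable   # CAP:  controller capabilities
    nvme get-property "$DEV" --offset=0x08 --human-readable   # VS:   spec version (1.3 here)
    nvme get-property "$DEV" --offset=0x14 --human-readable   # CC:   configuration (CC.EN)
    nvme get-property "$DEV" --offset=0x1c --human-readable   # CSTS: status (CSTS.RDY)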
00:27:41.282 [2024-07-24 09:13:19.279478] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.279485] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.279501] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.282 [2024-07-24 09:13:19.279518] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279527] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279534] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.279545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.282 [2024-07-24 09:13:19.279566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.282 [2024-07-24 09:13:19.279668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.282 [2024-07-24 09:13:19.279683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.279694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279702] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.279710] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.282 [2024-07-24 09:13:19.279719] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.279732] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:27:41.282 [2024-07-24 09:13:19.279747] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.279763] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279771] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.279783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.282 [2024-07-24 09:13:19.279804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.282 [2024-07-24 09:13:19.279943] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.282 [2024-07-24 09:13:19.279959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.282 [2024-07-24 09:13:19.279966] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.279974] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a6630): datao=0, datal=4096, cccid=0 00:27:41.282 [2024-07-24 09:13:19.279982] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f4f80) on tqpair(0x22a6630): expected_datao=0, payload_size=4096 00:27:41.282 [2024-07-24 09:13:19.279990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280012] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280021] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.282 [2024-07-24 09:13:19.280115] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.280123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280130] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.280142] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:27:41.282 [2024-07-24 09:13:19.280151] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:27:41.282 [2024-07-24 09:13:19.280159] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:27:41.282 [2024-07-24 09:13:19.280168] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:27:41.282 [2024-07-24 09:13:19.280177] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:27:41.282 [2024-07-24 09:13:19.280185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.280200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.280217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280233] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.282 [2024-07-24 09:13:19.280271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.282 [2024-07-24 09:13:19.280393] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.282 [2024-07-24 09:13:19.280411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.280418] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.280437] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280445] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280452] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.282 [2024-07-24 09:13:19.280472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280485] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.282 [2024-07-24 09:13:19.280504] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280511] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.282 [2024-07-24 09:13:19.280536] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280549] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.282 [2024-07-24 09:13:19.280566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.280586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:41.282 [2024-07-24 09:13:19.280598] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280606] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a6630) 00:27:41.282 [2024-07-24 09:13:19.280616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.282 [2024-07-24 09:13:19.280653] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f4f80, cid 0, qid 0 00:27:41.282 [2024-07-24 09:13:19.280665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5100, cid 1, qid 0 00:27:41.282 [2024-07-24 09:13:19.280673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5280, cid 2, qid 0 00:27:41.282 [2024-07-24 09:13:19.280680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.282 [2024-07-24 09:13:19.280688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5580, cid 4, qid 0 00:27:41.282 [2024-07-24 09:13:19.280857] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.282 [2024-07-24 09:13:19.280874] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.282 [2024-07-24 09:13:19.280882] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280889] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5580) on tqpair=0x22a6630 00:27:41.282 [2024-07-24 09:13:19.280898] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:27:41.282 [2024-07-24 09:13:19.280907] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:27:41.282 [2024-07-24 09:13:19.280924] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.282 [2024-07-24 09:13:19.280933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a6630) 00:27:41.283 [2024-07-24 09:13:19.280944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.283 [2024-07-24 09:13:19.280965] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5580, cid 4, qid 0 00:27:41.283 [2024-07-24 09:13:19.281087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.283 [2024-07-24 09:13:19.281117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.283 [2024-07-24 09:13:19.281126] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.281132] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a6630): datao=0, datal=4096, cccid=4 00:27:41.283 [2024-07-24 09:13:19.281140] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5580) on tqpair(0x22a6630): expected_datao=0, payload_size=4096 00:27:41.283 [2024-07-24 09:13:19.281148] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.281165] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.281174] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323213] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.283 [2024-07-24 09:13:19.323232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.283 [2024-07-24 09:13:19.323239] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5580) on tqpair=0x22a6630 00:27:41.283 [2024-07-24 09:13:19.323266] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:27:41.283 [2024-07-24 09:13:19.323306] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a6630) 00:27:41.283 [2024-07-24 09:13:19.323329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.283 [2024-07-24 09:13:19.323340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22a6630) 00:27:41.283 [2024-07-24 
09:13:19.323363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.283 [2024-07-24 09:13:19.323399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5580, cid 4, qid 0 00:27:41.283 [2024-07-24 09:13:19.323411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5700, cid 5, qid 0 00:27:41.283 [2024-07-24 09:13:19.323548] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.283 [2024-07-24 09:13:19.323560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.283 [2024-07-24 09:13:19.323567] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323573] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a6630): datao=0, datal=1024, cccid=4 00:27:41.283 [2024-07-24 09:13:19.323585] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5580) on tqpair(0x22a6630): expected_datao=0, payload_size=1024 00:27:41.283 [2024-07-24 09:13:19.323593] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323603] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323611] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.283 [2024-07-24 09:13:19.323629] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.283 [2024-07-24 09:13:19.323635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.323642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5700) on tqpair=0x22a6630 00:27:41.283 [2024-07-24 09:13:19.369119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.283 [2024-07-24 09:13:19.369138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.283 [2024-07-24 09:13:19.369145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369152] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5580) on tqpair=0x22a6630 00:27:41.283 [2024-07-24 09:13:19.369170] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369180] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a6630) 00:27:41.283 [2024-07-24 09:13:19.369191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.283 [2024-07-24 09:13:19.369221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5580, cid 4, qid 0 00:27:41.283 [2024-07-24 09:13:19.369373] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.283 [2024-07-24 09:13:19.369388] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.283 [2024-07-24 09:13:19.369395] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369402] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a6630): datao=0, datal=3072, cccid=4 00:27:41.283 [2024-07-24 09:13:19.369410] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5580) on tqpair(0x22a6630): expected_datao=0, payload_size=3072 00:27:41.283 
[2024-07-24 09:13:19.369417] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369427] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369435] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369447] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.283 [2024-07-24 09:13:19.369457] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.283 [2024-07-24 09:13:19.369463] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5580) on tqpair=0x22a6630 00:27:41.283 [2024-07-24 09:13:19.369485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22a6630) 00:27:41.283 [2024-07-24 09:13:19.369504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.283 [2024-07-24 09:13:19.369532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5580, cid 4, qid 0 00:27:41.283 [2024-07-24 09:13:19.369658] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.283 [2024-07-24 09:13:19.369670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.283 [2024-07-24 09:13:19.369676] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369683] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22a6630): datao=0, datal=8, cccid=4 00:27:41.283 [2024-07-24 09:13:19.369691] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22f5580) on tqpair(0x22a6630): expected_datao=0, payload_size=8 00:27:41.283 [2024-07-24 09:13:19.369705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369715] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.283 [2024-07-24 09:13:19.369723] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.546 [2024-07-24 09:13:19.415116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.546 [2024-07-24 09:13:19.415134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.546 [2024-07-24 09:13:19.415142] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.546 [2024-07-24 09:13:19.415163] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5580) on tqpair=0x22a6630 00:27:41.546 ===================================================== 00:27:41.546 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:41.546 ===================================================== 00:27:41.546 Controller Capabilities/Features 00:27:41.546 ================================ 00:27:41.546 Vendor ID: 0000 00:27:41.546 Subsystem Vendor ID: 0000 00:27:41.546 Serial Number: .................... 00:27:41.546 Model Number: ........................................ 
00:27:41.546 Firmware Version: 24.09 00:27:41.546 Recommended Arb Burst: 0 00:27:41.546 IEEE OUI Identifier: 00 00 00 00:27:41.546 Multi-path I/O 00:27:41.546 May have multiple subsystem ports: No 00:27:41.546 May have multiple controllers: No 00:27:41.546 Associated with SR-IOV VF: No 00:27:41.546 Max Data Transfer Size: 131072 00:27:41.546 Max Number of Namespaces: 0 00:27:41.546 Max Number of I/O Queues: 1024 00:27:41.546 NVMe Specification Version (VS): 1.3 00:27:41.546 NVMe Specification Version (Identify): 1.3 00:27:41.546 Maximum Queue Entries: 128 00:27:41.546 Contiguous Queues Required: Yes 00:27:41.546 Arbitration Mechanisms Supported 00:27:41.546 Weighted Round Robin: Not Supported 00:27:41.546 Vendor Specific: Not Supported 00:27:41.546 Reset Timeout: 15000 ms 00:27:41.546 Doorbell Stride: 4 bytes 00:27:41.546 NVM Subsystem Reset: Not Supported 00:27:41.546 Command Sets Supported 00:27:41.546 NVM Command Set: Supported 00:27:41.546 Boot Partition: Not Supported 00:27:41.546 Memory Page Size Minimum: 4096 bytes 00:27:41.546 Memory Page Size Maximum: 4096 bytes 00:27:41.546 Persistent Memory Region: Not Supported 00:27:41.546 Optional Asynchronous Events Supported 00:27:41.546 Namespace Attribute Notices: Not Supported 00:27:41.546 Firmware Activation Notices: Not Supported 00:27:41.546 ANA Change Notices: Not Supported 00:27:41.546 PLE Aggregate Log Change Notices: Not Supported 00:27:41.546 LBA Status Info Alert Notices: Not Supported 00:27:41.546 EGE Aggregate Log Change Notices: Not Supported 00:27:41.546 Normal NVM Subsystem Shutdown event: Not Supported 00:27:41.546 Zone Descriptor Change Notices: Not Supported 00:27:41.546 Discovery Log Change Notices: Supported 00:27:41.546 Controller Attributes 00:27:41.546 128-bit Host Identifier: Not Supported 00:27:41.546 Non-Operational Permissive Mode: Not Supported 00:27:41.546 NVM Sets: Not Supported 00:27:41.546 Read Recovery Levels: Not Supported 00:27:41.546 Endurance Groups: Not Supported 00:27:41.546 Predictable Latency Mode: Not Supported 00:27:41.546 Traffic Based Keep ALive: Not Supported 00:27:41.546 Namespace Granularity: Not Supported 00:27:41.546 SQ Associations: Not Supported 00:27:41.546 UUID List: Not Supported 00:27:41.546 Multi-Domain Subsystem: Not Supported 00:27:41.546 Fixed Capacity Management: Not Supported 00:27:41.546 Variable Capacity Management: Not Supported 00:27:41.546 Delete Endurance Group: Not Supported 00:27:41.546 Delete NVM Set: Not Supported 00:27:41.546 Extended LBA Formats Supported: Not Supported 00:27:41.546 Flexible Data Placement Supported: Not Supported 00:27:41.546 00:27:41.546 Controller Memory Buffer Support 00:27:41.546 ================================ 00:27:41.546 Supported: No 00:27:41.546 00:27:41.546 Persistent Memory Region Support 00:27:41.546 ================================ 00:27:41.546 Supported: No 00:27:41.546 00:27:41.546 Admin Command Set Attributes 00:27:41.546 ============================ 00:27:41.546 Security Send/Receive: Not Supported 00:27:41.546 Format NVM: Not Supported 00:27:41.546 Firmware Activate/Download: Not Supported 00:27:41.546 Namespace Management: Not Supported 00:27:41.546 Device Self-Test: Not Supported 00:27:41.546 Directives: Not Supported 00:27:41.546 NVMe-MI: Not Supported 00:27:41.546 Virtualization Management: Not Supported 00:27:41.546 Doorbell Buffer Config: Not Supported 00:27:41.546 Get LBA Status Capability: Not Supported 00:27:41.546 Command & Feature Lockdown Capability: Not Supported 00:27:41.546 Abort Command Limit: 1 00:27:41.546 Async 
Event Request Limit: 4 00:27:41.546 Number of Firmware Slots: N/A 00:27:41.546 Firmware Slot 1 Read-Only: N/A 00:27:41.546 Firmware Activation Without Reset: N/A 00:27:41.546 Multiple Update Detection Support: N/A 00:27:41.546 Firmware Update Granularity: No Information Provided 00:27:41.546 Per-Namespace SMART Log: No 00:27:41.546 Asymmetric Namespace Access Log Page: Not Supported 00:27:41.546 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:41.546 Command Effects Log Page: Not Supported 00:27:41.546 Get Log Page Extended Data: Supported 00:27:41.546 Telemetry Log Pages: Not Supported 00:27:41.546 Persistent Event Log Pages: Not Supported 00:27:41.546 Supported Log Pages Log Page: May Support 00:27:41.546 Commands Supported & Effects Log Page: Not Supported 00:27:41.546 Feature Identifiers & Effects Log Page:May Support 00:27:41.546 NVMe-MI Commands & Effects Log Page: May Support 00:27:41.546 Data Area 4 for Telemetry Log: Not Supported 00:27:41.546 Error Log Page Entries Supported: 128 00:27:41.546 Keep Alive: Not Supported 00:27:41.546 00:27:41.546 NVM Command Set Attributes 00:27:41.547 ========================== 00:27:41.547 Submission Queue Entry Size 00:27:41.547 Max: 1 00:27:41.547 Min: 1 00:27:41.547 Completion Queue Entry Size 00:27:41.547 Max: 1 00:27:41.547 Min: 1 00:27:41.547 Number of Namespaces: 0 00:27:41.547 Compare Command: Not Supported 00:27:41.547 Write Uncorrectable Command: Not Supported 00:27:41.547 Dataset Management Command: Not Supported 00:27:41.547 Write Zeroes Command: Not Supported 00:27:41.547 Set Features Save Field: Not Supported 00:27:41.547 Reservations: Not Supported 00:27:41.547 Timestamp: Not Supported 00:27:41.547 Copy: Not Supported 00:27:41.547 Volatile Write Cache: Not Present 00:27:41.547 Atomic Write Unit (Normal): 1 00:27:41.547 Atomic Write Unit (PFail): 1 00:27:41.547 Atomic Compare & Write Unit: 1 00:27:41.547 Fused Compare & Write: Supported 00:27:41.547 Scatter-Gather List 00:27:41.547 SGL Command Set: Supported 00:27:41.547 SGL Keyed: Supported 00:27:41.547 SGL Bit Bucket Descriptor: Not Supported 00:27:41.547 SGL Metadata Pointer: Not Supported 00:27:41.547 Oversized SGL: Not Supported 00:27:41.547 SGL Metadata Address: Not Supported 00:27:41.547 SGL Offset: Supported 00:27:41.547 Transport SGL Data Block: Not Supported 00:27:41.547 Replay Protected Memory Block: Not Supported 00:27:41.547 00:27:41.547 Firmware Slot Information 00:27:41.547 ========================= 00:27:41.547 Active slot: 0 00:27:41.547 00:27:41.547 00:27:41.547 Error Log 00:27:41.547 ========= 00:27:41.547 00:27:41.547 Active Namespaces 00:27:41.547 ================= 00:27:41.547 Discovery Log Page 00:27:41.547 ================== 00:27:41.547 Generation Counter: 2 00:27:41.547 Number of Records: 2 00:27:41.547 Record Format: 0 00:27:41.547 00:27:41.547 Discovery Log Entry 0 00:27:41.547 ---------------------- 00:27:41.547 Transport Type: 3 (TCP) 00:27:41.547 Address Family: 1 (IPv4) 00:27:41.547 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:41.547 Entry Flags: 00:27:41.547 Duplicate Returned Information: 1 00:27:41.547 Explicit Persistent Connection Support for Discovery: 1 00:27:41.547 Transport Requirements: 00:27:41.547 Secure Channel: Not Required 00:27:41.547 Port ID: 0 (0x0000) 00:27:41.547 Controller ID: 65535 (0xffff) 00:27:41.547 Admin Max SQ Size: 128 00:27:41.547 Transport Service Identifier: 4420 00:27:41.547 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:41.547 Transport Address: 10.0.0.2 00:27:41.547 
Discovery Log Entry 1 00:27:41.547 ---------------------- 00:27:41.547 Transport Type: 3 (TCP) 00:27:41.547 Address Family: 1 (IPv4) 00:27:41.547 Subsystem Type: 2 (NVM Subsystem) 00:27:41.547 Entry Flags: 00:27:41.547 Duplicate Returned Information: 0 00:27:41.547 Explicit Persistent Connection Support for Discovery: 0 00:27:41.547 Transport Requirements: 00:27:41.547 Secure Channel: Not Required 00:27:41.547 Port ID: 0 (0x0000) 00:27:41.547 Controller ID: 65535 (0xffff) 00:27:41.547 Admin Max SQ Size: 128 00:27:41.547 Transport Service Identifier: 4420 00:27:41.547 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:27:41.547 Transport Address: 10.0.0.2 [2024-07-24 09:13:19.415273] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:27:41.547 [2024-07-24 09:13:19.415295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f4f80) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.547 [2024-07-24 09:13:19.415317] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5100) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.547 [2024-07-24 09:13:19.415332] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5280) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.547 [2024-07-24 09:13:19.415348] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.547 [2024-07-24 09:13:19.415373] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415382] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415389] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.547 [2024-07-24 09:13:19.415400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.547 [2024-07-24 09:13:19.415425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.547 [2024-07-24 09:13:19.415546] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.547 [2024-07-24 09:13:19.415560] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.547 [2024-07-24 09:13:19.415566] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415593] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.547 [2024-07-24 
09:13:19.415609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.547 [2024-07-24 09:13:19.415635] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.547 [2024-07-24 09:13:19.415758] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.547 [2024-07-24 09:13:19.415773] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.547 [2024-07-24 09:13:19.415780] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.415795] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:27:41.547 [2024-07-24 09:13:19.415808] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:27:41.547 [2024-07-24 09:13:19.415825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.415841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.547 [2024-07-24 09:13:19.415851] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.547 [2024-07-24 09:13:19.415872] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.547 [2024-07-24 09:13:19.415985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.547 [2024-07-24 09:13:19.416000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.547 [2024-07-24 09:13:19.416007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.416031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.547 [2024-07-24 09:13:19.416058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.547 [2024-07-24 09:13:19.416078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.547 [2024-07-24 09:13:19.416196] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.547 [2024-07-24 09:13:19.416210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.547 [2024-07-24 09:13:19.416217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.547 [2024-07-24 09:13:19.416239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.547 [2024-07-24 09:13:19.416256] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.547 [2024-07-24 09:13:19.416266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.547 [2024-07-24 09:13:19.416287] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.547 [2024-07-24 09:13:19.416390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.547 [2024-07-24 09:13:19.416405] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.416412] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.416435] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416451] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.416462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.416482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.416591] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.416606] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.416613] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416624] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.416641] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416657] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.416667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.416688] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.416788] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.416800] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.416807] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416813] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.416829] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416839] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.416846] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.416856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.416877] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.416984] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.416999] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417006] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417012] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.417029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417038] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.417055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.417076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.417187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.417203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.417232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.417259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.417280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.417381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.417396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417409] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.417429] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.417456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.417477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 
[2024-07-24 09:13:19.417577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.417592] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417606] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.417622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417631] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.417649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.417669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.417768] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.417780] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.417810] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417819] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417826] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.417836] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.417857] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.417962] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.417974] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.417981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.417988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.418003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.418030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.418050] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.418157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.418171] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
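With the two-record discovery log page fetched and rendered above, the host tears the session down: RTD3E = 0, a 10000 ms shutdown timeout, and the repeated FABRIC PROPERTY GET capsules here are the CSTS poll that ends with "shutdown complete in 7 milliseconds" just below. For context, a sketch of the equivalent exchange from a Linux kernel initiator with nvme-cli, assuming the nvme-tcp module is available and 10.0.0.2:4420 is reachable:

    # Kernel-initiator view of the same discovery exchange (assumed environment).
    modprobe nvme-tcp
    nvme discover -t tcp -a 10.0.0.2 -s 4420     # renders Discovery Log Entries 0 and 1
    nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme list                                    # the Malloc0-backed namespace should appear
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1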
00:27:41.548 [2024-07-24 09:13:19.418177] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418184] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.418200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418213] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.418231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.418252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.418358] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.418370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.418377] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418383] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.418399] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418415] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.548 [2024-07-24 09:13:19.418426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.548 [2024-07-24 09:13:19.418446] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.548 [2024-07-24 09:13:19.418545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.548 [2024-07-24 09:13:19.418556] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.548 [2024-07-24 09:13:19.418563] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418570] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.548 [2024-07-24 09:13:19.418585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.548 [2024-07-24 09:13:19.418601] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.549 [2024-07-24 09:13:19.418612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.418632] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.549 [2024-07-24 09:13:19.418739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.418754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.418761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.549 [2024-07-24 09:13:19.418784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418793] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418800] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.549 [2024-07-24 09:13:19.418810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.418831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.549 [2024-07-24 09:13:19.418934] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.418946] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.418952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.549 [2024-07-24 09:13:19.418975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.418995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.549 [2024-07-24 09:13:19.419005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.419026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.549 [2024-07-24 09:13:19.423115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.423132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.423139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.423145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.549 [2024-07-24 09:13:19.423179] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.423189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.423196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22a6630) 00:27:41.549 [2024-07-24 09:13:19.423206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.423229] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22f5400, cid 3, qid 0 00:27:41.549 [2024-07-24 09:13:19.423342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.423354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.423361] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.423368] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x22f5400) on tqpair=0x22a6630 00:27:41.549 [2024-07-24 09:13:19.423381] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 
milliseconds 00:27:41.549 00:27:41.549 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:27:41.549 [2024-07-24 09:13:19.451194] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:41.549 [2024-07-24 09:13:19.451236] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3862512 ] 00:27:41.549 EAL: No free 2048 kB hugepages reported on node 1 00:27:41.549 [2024-07-24 09:13:19.467392] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:41.549 [2024-07-24 09:13:19.484877] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:27:41.549 [2024-07-24 09:13:19.484922] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:27:41.549 [2024-07-24 09:13:19.484931] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:27:41.549 [2024-07-24 09:13:19.484944] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:27:41.549 [2024-07-24 09:13:19.484955] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:27:41.549 [2024-07-24 09:13:19.485237] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:27:41.549 [2024-07-24 09:13:19.485278] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1e43630 0 00:27:41.549 [2024-07-24 09:13:19.492111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:27:41.549 [2024-07-24 09:13:19.492138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:27:41.549 [2024-07-24 09:13:19.492148] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:27:41.549 [2024-07-24 09:13:19.492154] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:27:41.549 [2024-07-24 09:13:19.492208] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.492220] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.492227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.549 [2024-07-24 09:13:19.492241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:27:41.549 [2024-07-24 09:13:19.492268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.549 [2024-07-24 09:13:19.499114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.499132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.499139] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.549 [2024-07-24 09:13:19.499161] nvme_fabric.c: 
622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:27:41.549 [2024-07-24 09:13:19.499171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:27:41.549 [2024-07-24 09:13:19.499181] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:27:41.549 [2024-07-24 09:13:19.499199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.549 [2024-07-24 09:13:19.499227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.499251] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.549 [2024-07-24 09:13:19.499395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.499414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.499421] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499428] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.549 [2024-07-24 09:13:19.499440] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:27:41.549 [2024-07-24 09:13:19.499455] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:27:41.549 [2024-07-24 09:13:19.499467] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499475] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.549 [2024-07-24 09:13:19.499492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.499514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.549 [2024-07-24 09:13:19.499619] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.499631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.499638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.549 [2024-07-24 09:13:19.499654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:27:41.549 [2024-07-24 09:13:19.499672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:27:41.549 [2024-07-24 09:13:19.499685] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499699] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.549 [2024-07-24 09:13:19.499709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.549 [2024-07-24 09:13:19.499731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.549 [2024-07-24 09:13:19.499830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.549 [2024-07-24 09:13:19.499842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.549 [2024-07-24 09:13:19.499849] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.549 [2024-07-24 09:13:19.499856] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.549 [2024-07-24 09:13:19.499864] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:27:41.549 [2024-07-24 09:13:19.499881] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.499890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.499896] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.499907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-24 09:13:19.499928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.500034] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-24 09:13:19.500050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-24 09:13:19.500057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.550 [2024-07-24 09:13:19.500071] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:27:41.550 [2024-07-24 09:13:19.500080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:27:41.550 [2024-07-24 09:13:19.500093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:27:41.550 [2024-07-24 09:13:19.500215] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:27:41.550 [2024-07-24 09:13:19.500224] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:27:41.550 [2024-07-24 09:13:19.500236] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.500260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:41.550 [2024-07-24 09:13:19.500282] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.500428] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-24 09:13:19.500440] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-24 09:13:19.500447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.550 [2024-07-24 09:13:19.500466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:27:41.550 [2024-07-24 09:13:19.500483] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500492] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.500509] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-24 09:13:19.500530] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.500633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-24 09:13:19.500648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-24 09:13:19.500655] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500662] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.550 [2024-07-24 09:13:19.500669] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:27:41.550 [2024-07-24 09:13:19.500678] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.500691] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:27:41.550 [2024-07-24 09:13:19.500705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.500719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500726] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.500738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-24 09:13:19.500759] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.500904] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.550 [2024-07-24 09:13:19.500919] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.550 [2024-07-24 09:13:19.500926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 
09:13:19.500933] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=4096, cccid=0 00:27:41.550 [2024-07-24 09:13:19.500940] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e91f80) on tqpair(0x1e43630): expected_datao=0, payload_size=4096 00:27:41.550 [2024-07-24 09:13:19.500948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500959] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500966] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.500978] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-24 09:13:19.500988] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-24 09:13:19.500995] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.550 [2024-07-24 09:13:19.501013] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:27:41.550 [2024-07-24 09:13:19.501021] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:27:41.550 [2024-07-24 09:13:19.501029] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:27:41.550 [2024-07-24 09:13:19.501039] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:27:41.550 [2024-07-24 09:13:19.501048] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:27:41.550 [2024-07-24 09:13:19.501056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.501070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.501086] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501120] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.550 [2024-07-24 09:13:19.501142] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.501257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.550 [2024-07-24 09:13:19.501273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.550 [2024-07-24 09:13:19.501279] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501286] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.550 [2024-07-24 09:13:19.501297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501304] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 
09:13:19.501311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-24 09:13:19.501331] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501338] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501344] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-24 09:13:19.501362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-24 09:13:19.501394] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501407] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.550 [2024-07-24 09:13:19.501440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.501459] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:27:41.550 [2024-07-24 09:13:19.501472] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.550 [2024-07-24 09:13:19.501479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.550 [2024-07-24 09:13:19.501492] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.550 [2024-07-24 09:13:19.501515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e91f80, cid 0, qid 0 00:27:41.550 [2024-07-24 09:13:19.501541] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92100, cid 1, qid 0 00:27:41.550 [2024-07-24 09:13:19.501549] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92280, cid 2, qid 0 00:27:41.550 [2024-07-24 09:13:19.501557] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.550 [2024-07-24 09:13:19.501564] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.550 [2024-07-24 09:13:19.501716] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-24 09:13:19.501732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
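The records above trace the controller bring-up state machine that spdk_nvme_identify drives: connect the admin queue, read VS and CAP, toggle CC.EN, IDENTIFY the controller, queue four ASYNC EVENT REQUESTs (cids 0-3), then negotiate the keep-alive timeout and number of queues. A minimal host-side sketch that reaches this same path through SPDK's public API; this is illustrative only, not part of the test, and it assumes the target at 10.0.0.2:4420 used by this run is reachable (error handling trimmed, "identify_sketch" is a hypothetical app name):

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Initialize the SPDK environment (hugepages, memory, etc.). */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same transport ID string the test passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() runs the init sequence logged above: connect
	 * adminq, read vs/cap, enable the controller, identify, configure
	 * AER, set keep alive timeout, set number of queues. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("SN: %.20s MN: %.40s\n", (const char *)cdata->sn,
	       (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}
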
00:27:41.551 [2024-07-24 09:13:19.501739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.501746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.551 [2024-07-24 09:13:19.501754] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:27:41.551 [2024-07-24 09:13:19.501762] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.501781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.501794] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.501805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.501812] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.501819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.551 [2024-07-24 09:13:19.501845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:27:41.551 [2024-07-24 09:13:19.501867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.551 [2024-07-24 09:13:19.502031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-24 09:13:19.502047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-24 09:13:19.502054] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.551 [2024-07-24 09:13:19.502135] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502171] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502178] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.551 [2024-07-24 09:13:19.502189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-24 09:13:19.502225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.551 [2024-07-24 09:13:19.502415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-24 09:13:19.502428] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-24 09:13:19.502435] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502445] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=4096, cccid=4 00:27:41.551 [2024-07-24 
09:13:19.502453] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92580) on tqpair(0x1e43630): expected_datao=0, payload_size=4096 00:27:41.551 [2024-07-24 09:13:19.502461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502478] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502487] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-24 09:13:19.502534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-24 09:13:19.502541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502548] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.551 [2024-07-24 09:13:19.502563] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:27:41.551 [2024-07-24 09:13:19.502581] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.551 [2024-07-24 09:13:19.502631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-24 09:13:19.502652] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.551 [2024-07-24 09:13:19.502772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-24 09:13:19.502784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-24 09:13:19.502791] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502797] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=4096, cccid=4 00:27:41.551 [2024-07-24 09:13:19.502805] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92580) on tqpair(0x1e43630): expected_datao=0, payload_size=4096 00:27:41.551 [2024-07-24 09:13:19.502812] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502837] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-24 09:13:19.502888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-24 09:13:19.502895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.551 [2024-07-24 09:13:19.502922] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
state to identify namespace id descriptors (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502941] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.502955] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.502962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.551 [2024-07-24 09:13:19.502973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.551 [2024-07-24 09:13:19.502994] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.551 [2024-07-24 09:13:19.507116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.551 [2024-07-24 09:13:19.507134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.551 [2024-07-24 09:13:19.507141] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.507147] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=4096, cccid=4 00:27:41.551 [2024-07-24 09:13:19.507169] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92580) on tqpair(0x1e43630): expected_datao=0, payload_size=4096 00:27:41.551 [2024-07-24 09:13:19.507177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.507188] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.507196] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.507204] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.551 [2024-07-24 09:13:19.507214] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.551 [2024-07-24 09:13:19.507220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.551 [2024-07-24 09:13:19.507227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.551 [2024-07-24 09:13:19.507240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.507257] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.507273] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.507286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:27:41.551 [2024-07-24 09:13:19.507296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:27:41.552 [2024-07-24 09:13:19.507304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:27:41.552 [2024-07-24 09:13:19.507314] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:27:41.552 
[2024-07-24 09:13:19.507322] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:27:41.552 [2024-07-24 09:13:19.507330] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:27:41.552 [2024-07-24 09:13:19.507349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507358] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.507369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.507380] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.507403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:27:41.552 [2024-07-24 09:13:19.507444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.552 [2024-07-24 09:13:19.507456] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92700, cid 5, qid 0 00:27:41.552 [2024-07-24 09:13:19.507639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.507652] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.507663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.507680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.507690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.507696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92700) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.507719] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.507739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.507760] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92700, cid 5, qid 0 00:27:41.552 [2024-07-24 09:13:19.507875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.507888] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.507894] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92700) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.507916] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.507925] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.507935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.507955] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92700, cid 5, qid 0 00:27:41.552 [2024-07-24 09:13:19.508060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.508075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.508082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92700) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.508112] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508122] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.508133] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.508154] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92700, cid 5, qid 0 00:27:41.552 [2024-07-24 09:13:19.508254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.508266] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.508273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508280] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92700) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.508303] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508314] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.508324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.508336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508343] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.508353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.508368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.508386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.508397] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508405] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e43630) 00:27:41.552 [2024-07-24 09:13:19.508414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.552 [2024-07-24 09:13:19.508451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92700, cid 5, qid 0 00:27:41.552 [2024-07-24 09:13:19.508462] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92580, cid 4, qid 0 00:27:41.552 [2024-07-24 09:13:19.508470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92880, cid 6, qid 0 00:27:41.552 [2024-07-24 09:13:19.508477] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a00, cid 7, qid 0 00:27:41.552 [2024-07-24 09:13:19.508785] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.552 [2024-07-24 09:13:19.508802] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.552 [2024-07-24 09:13:19.508809] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508815] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=8192, cccid=5 00:27:41.552 [2024-07-24 09:13:19.508823] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92700) on tqpair(0x1e43630): expected_datao=0, payload_size=8192 00:27:41.552 [2024-07-24 09:13:19.508830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508841] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508849] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508858] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.552 [2024-07-24 09:13:19.508867] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.552 [2024-07-24 09:13:19.508873] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508880] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=512, cccid=4 00:27:41.552 [2024-07-24 09:13:19.508888] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92580) on tqpair(0x1e43630): expected_datao=0, payload_size=512 00:27:41.552 [2024-07-24 09:13:19.508895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508904] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508911] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.552 [2024-07-24 09:13:19.508929] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.552 [2024-07-24 09:13:19.508935] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508942] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=512, cccid=6 00:27:41.552 [2024-07-24 09:13:19.508949] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92880) on tqpair(0x1e43630): expected_datao=0, payload_size=512 00:27:41.552 [2024-07-24 09:13:19.508957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.552 [2024-07-24 
09:13:19.508966] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508973] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.508982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:27:41.552 [2024-07-24 09:13:19.508994] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:27:41.552 [2024-07-24 09:13:19.509002] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.509008] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1e43630): datao=0, datal=4096, cccid=7 00:27:41.552 [2024-07-24 09:13:19.509016] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e92a00) on tqpair(0x1e43630): expected_datao=0, payload_size=4096 00:27:41.552 [2024-07-24 09:13:19.509023] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.509033] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.509040] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.509052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.552 [2024-07-24 09:13:19.509077] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.552 [2024-07-24 09:13:19.509083] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.552 [2024-07-24 09:13:19.509090] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92700) on tqpair=0x1e43630 00:27:41.552 [2024-07-24 09:13:19.509116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-24 09:13:19.509129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-24 09:13:19.509136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-24 09:13:19.509142] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92580) on tqpair=0x1e43630 00:27:41.553 [2024-07-24 09:13:19.509157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-24 09:13:19.509168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-24 09:13:19.509174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-24 09:13:19.509180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92880) on tqpair=0x1e43630 00:27:41.553 [2024-07-24 09:13:19.509190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.553 [2024-07-24 09:13:19.509200] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.553 [2024-07-24 09:13:19.509207] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.553 [2024-07-24 09:13:19.509213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a00) on tqpair=0x1e43630
=====================================================
00:27:41.553 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:27:41.553 =====================================================
00:27:41.553 Controller Capabilities/Features
00:27:41.553 ================================
00:27:41.553 Vendor ID: 8086
00:27:41.553 Subsystem Vendor ID: 8086
00:27:41.553 Serial Number: SPDK00000000000001
00:27:41.553 Model Number: SPDK bdev Controller
00:27:41.553 Firmware Version: 24.09
00:27:41.553 Recommended Arb Burst: 6
00:27:41.553 IEEE OUI Identifier: e4 d2 5c
00:27:41.553 Multi-path I/O
00:27:41.553 May have multiple subsystem ports: Yes
00:27:41.553 May have multiple controllers: Yes
00:27:41.553 Associated with SR-IOV VF: No
00:27:41.553 Max Data Transfer Size: 131072
00:27:41.553 Max Number of Namespaces: 32
00:27:41.553 Max Number of I/O Queues: 127
00:27:41.553 NVMe Specification Version (VS): 1.3
00:27:41.553 NVMe Specification Version (Identify): 1.3
00:27:41.553 Maximum Queue Entries: 128
00:27:41.553 Contiguous Queues Required: Yes
00:27:41.553 Arbitration Mechanisms Supported
00:27:41.553 Weighted Round Robin: Not Supported
00:27:41.553 Vendor Specific: Not Supported
00:27:41.553 Reset Timeout: 15000 ms
00:27:41.553 Doorbell Stride: 4 bytes
00:27:41.553 NVM Subsystem Reset: Not Supported
00:27:41.553 Command Sets Supported
00:27:41.553 NVM Command Set: Supported
00:27:41.553 Boot Partition: Not Supported
00:27:41.553 Memory Page Size Minimum: 4096 bytes
00:27:41.553 Memory Page Size Maximum: 4096 bytes
00:27:41.553 Persistent Memory Region: Not Supported
00:27:41.553 Optional Asynchronous Events Supported
00:27:41.553 Namespace Attribute Notices: Supported
00:27:41.553 Firmware Activation Notices: Not Supported
00:27:41.553 ANA Change Notices: Not Supported
00:27:41.553 PLE Aggregate Log Change Notices: Not Supported
00:27:41.553 LBA Status Info Alert Notices: Not Supported
00:27:41.553 EGE Aggregate Log Change Notices: Not Supported
00:27:41.553 Normal NVM Subsystem Shutdown event: Not Supported
00:27:41.553 Zone Descriptor Change Notices: Not Supported
00:27:41.553 Discovery Log Change Notices: Not Supported
00:27:41.553 Controller Attributes
00:27:41.553 128-bit Host Identifier: Supported
00:27:41.553 Non-Operational Permissive Mode: Not Supported
00:27:41.553 NVM Sets: Not Supported
00:27:41.553 Read Recovery Levels: Not Supported
00:27:41.553 Endurance Groups: Not Supported
00:27:41.553 Predictable Latency Mode: Not Supported
00:27:41.553 Traffic Based Keep ALive: Not Supported
00:27:41.553 Namespace Granularity: Not Supported
00:27:41.553 SQ Associations: Not Supported
00:27:41.553 UUID List: Not Supported
00:27:41.553 Multi-Domain Subsystem: Not Supported
00:27:41.553 Fixed Capacity Management: Not Supported
00:27:41.553 Variable Capacity Management: Not Supported
00:27:41.553 Delete Endurance Group: Not Supported
00:27:41.553 Delete NVM Set: Not Supported
00:27:41.553 Extended LBA Formats Supported: Not Supported
00:27:41.553 Flexible Data Placement Supported: Not Supported
00:27:41.553
00:27:41.553 Controller Memory Buffer Support
00:27:41.553 ================================
00:27:41.553 Supported: No
00:27:41.553
00:27:41.553 Persistent Memory Region Support
00:27:41.553 ================================
00:27:41.553 Supported: No
00:27:41.553
00:27:41.553 Admin Command Set Attributes
00:27:41.553 ============================
00:27:41.553 Security Send/Receive: Not Supported
00:27:41.553 Format NVM: Not Supported
00:27:41.553 Firmware Activate/Download: Not Supported
00:27:41.553 Namespace Management: Not Supported
00:27:41.553 Device Self-Test: Not Supported
00:27:41.553 Directives: Not Supported
00:27:41.553 NVMe-MI: Not Supported
00:27:41.553 Virtualization Management: Not Supported
00:27:41.553 Doorbell Buffer Config: Not Supported
00:27:41.553 Get LBA Status Capability: Not Supported
00:27:41.553 Command & Feature Lockdown Capability: Not Supported
00:27:41.553 Abort Command Limit: 4
00:27:41.553 Async Event Request Limit: 4
00:27:41.553 Number of Firmware Slots: N/A
00:27:41.553 Firmware Slot 1 Read-Only: N/A
00:27:41.553 Firmware Activation Without Reset: N/A
00:27:41.553 Multiple Update Detection Support: N/A
00:27:41.553 Firmware Update Granularity: No Information Provided
00:27:41.553 Per-Namespace SMART Log: No
00:27:41.553 Asymmetric Namespace Access Log Page: Not Supported
00:27:41.553 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:27:41.553 Command Effects Log Page: Supported
00:27:41.553 Get Log Page Extended Data: Supported
00:27:41.553 Telemetry Log Pages: Not Supported
00:27:41.553 Persistent Event Log Pages: Not Supported
00:27:41.553 Supported Log Pages Log Page: May Support
00:27:41.553 Commands Supported & Effects Log Page: Not Supported
00:27:41.553 Feature Identifiers & Effects Log Page:May Support
00:27:41.553 NVMe-MI Commands & Effects Log Page: May Support
00:27:41.553 Data Area 4 for Telemetry Log: Not Supported
00:27:41.553 Error Log Page Entries Supported: 128
00:27:41.553 Keep Alive: Supported
00:27:41.553 Keep Alive Granularity: 10000 ms
00:27:41.553
00:27:41.553 NVM Command Set Attributes
00:27:41.553 ==========================
00:27:41.553 Submission Queue Entry Size
00:27:41.553 Max: 64
00:27:41.553 Min: 64
00:27:41.553 Completion Queue Entry Size
00:27:41.553 Max: 16
00:27:41.553 Min: 16
00:27:41.553 Number of Namespaces: 32
00:27:41.553 Compare Command: Supported
00:27:41.553 Write Uncorrectable Command: Not Supported
00:27:41.553 Dataset Management Command: Supported
00:27:41.553 Write Zeroes Command: Supported
00:27:41.553 Set Features Save Field: Not Supported
00:27:41.553 Reservations: Supported
00:27:41.553 Timestamp: Not Supported
00:27:41.553 Copy: Supported
00:27:41.553 Volatile Write Cache: Present
00:27:41.553 Atomic Write Unit (Normal): 1
00:27:41.553 Atomic Write Unit (PFail): 1
00:27:41.553 Atomic Compare & Write Unit: 1
00:27:41.553 Fused Compare & Write: Supported
00:27:41.553 Scatter-Gather List
00:27:41.553 SGL Command Set: Supported
00:27:41.553 SGL Keyed: Supported
00:27:41.553 SGL Bit Bucket Descriptor: Not Supported
00:27:41.553 SGL Metadata Pointer: Not Supported
00:27:41.553 Oversized SGL: Not Supported
00:27:41.553 SGL Metadata Address: Not Supported
00:27:41.553 SGL Offset: Supported
00:27:41.553 Transport SGL Data Block: Not Supported
00:27:41.553 Replay Protected Memory Block: Not Supported
00:27:41.553
00:27:41.553 Firmware Slot Information
00:27:41.553 =========================
00:27:41.553 Active slot: 1
00:27:41.553 Slot 1 Firmware Revision: 24.09
00:27:41.553
00:27:41.553
00:27:41.553 Commands Supported and Effects
00:27:41.553 ==============================
00:27:41.553 Admin Commands
00:27:41.553 --------------
00:27:41.553 Get Log Page (02h): Supported
00:27:41.553 Identify (06h): Supported
00:27:41.553 Abort (08h): Supported
00:27:41.553 Set Features (09h): Supported
00:27:41.553 Get Features (0Ah): Supported
00:27:41.553 Asynchronous Event Request (0Ch): Supported
00:27:41.553 Keep Alive (18h): Supported
00:27:41.553 I/O Commands
00:27:41.553 ------------
00:27:41.553 Flush (00h): Supported LBA-Change
00:27:41.553 Write (01h): Supported LBA-Change
00:27:41.554 Read (02h): Supported
00:27:41.554 Compare (05h): Supported
00:27:41.554 Write Zeroes (08h): Supported LBA-Change
00:27:41.554 Dataset Management (09h): Supported LBA-Change
00:27:41.554 Copy (19h): Supported LBA-Change
00:27:41.554
00:27:41.554 Error Log
00:27:41.554 =========
00:27:41.554
00:27:41.554 Arbitration
00:27:41.554 ===========
00:27:41.554 Arbitration Burst: 1
00:27:41.554
00:27:41.554 Power Management
00:27:41.554 ================
00:27:41.554 Number of Power States: 1
00:27:41.554 Current Power State: Power State #0
00:27:41.554 Power State #0:
00:27:41.554 Max Power: 0.00 W
00:27:41.554 Non-Operational State: Operational
00:27:41.554 Entry Latency: Not Reported
00:27:41.554 Exit Latency: Not Reported
00:27:41.554 Relative Read Throughput: 0
00:27:41.554 Relative Read Latency: 0
00:27:41.554 Relative Write Throughput: 0
00:27:41.554 Relative Write Latency: 0
00:27:41.554 Idle Power: Not Reported
00:27:41.554 Active Power: Not Reported
00:27:41.554 Non-Operational Permissive Mode: Not Supported
00:27:41.554
00:27:41.554 Health Information
00:27:41.554 ==================
00:27:41.554 Critical Warnings:
00:27:41.554 Available Spare Space: OK
00:27:41.554 Temperature: OK
00:27:41.554 Device Reliability: OK
00:27:41.554 Read Only: No
00:27:41.554 Volatile Memory Backup: OK
00:27:41.554 Current Temperature: 0 Kelvin (-273 Celsius)
00:27:41.554 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:27:41.554 Available Spare: 0%
00:27:41.554 Available Spare Threshold: 0%
00:27:41.554 Life Percentage Used:[2024-07-24 09:13:19.509324] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509335] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.509346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.509368] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92a00, cid 7, qid 0 00:27:41.554 [2024-07-24 09:13:19.509524] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.509540] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.509547] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92a00) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509599] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:27:41.554 [2024-07-24 09:13:19.509618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e91f80) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.554 [2024-07-24 09:13:19.509637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92100) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.554 [2024-07-24 09:13:19.509653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92280) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.554 [2024-07-24 09:13:19.509673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:41.554 [2024-07-24 09:13:19.509709] nvme_tcp.c:
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.509733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.509754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.509898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.509914] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.509921] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509927] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.509939] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509947] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.509953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.509964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.509991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.510114] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.510130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.510137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510144] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.510152] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:27:41.554 [2024-07-24 09:13:19.510160] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:27:41.554 [2024-07-24 09:13:19.510176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510185] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.510202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.510223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.510325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.510341] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.510348] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.510371] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510381] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510387] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.510401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.510423] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.510532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.510548] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.510555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510562] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.510578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.510604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.510625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.510721] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.510733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.510740] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510747] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.510763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.510789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.554 [2024-07-24 09:13:19.510809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.554 [2024-07-24 09:13:19.510908] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.554 [2024-07-24 09:13:19.510924] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.554 [2024-07-24 09:13:19.510930] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510937] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.554 [2024-07-24 09:13:19.510954] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510963] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.554 [2024-07-24 09:13:19.510970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.554 [2024-07-24 09:13:19.510980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.555 [2024-07-24 09:13:19.511001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.555 [2024-07-24 09:13:19.515108] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.555 [2024-07-24 09:13:19.515126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.555 [2024-07-24 09:13:19.515133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.555 [2024-07-24 09:13:19.515140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.555 [2024-07-24 09:13:19.515173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:27:41.555 [2024-07-24 09:13:19.515183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:27:41.555 [2024-07-24 09:13:19.515190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1e43630) 00:27:41.555 [2024-07-24 09:13:19.515200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:41.555 [2024-07-24 09:13:19.515227] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e92400, cid 3, qid 0 00:27:41.555 [2024-07-24 09:13:19.515366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:27:41.555 [2024-07-24 09:13:19.515381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:27:41.555 [2024-07-24 09:13:19.515388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:27:41.555 [2024-07-24 09:13:19.515395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e92400) on tqpair=0x1e43630 00:27:41.555 [2024-07-24 09:13:19.515408] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:27:41.555 Data Units Read: 0 00:27:41.555 Data Units Written: 0 00:27:41.555 Host Read Commands: 0 00:27:41.555 Host Write Commands: 0 00:27:41.555 Controller Busy Time: 0 minutes 00:27:41.555 Power Cycles: 0 00:27:41.555 Power On Hours: 0 hours 00:27:41.555 Unsafe Shutdowns: 0 00:27:41.555 Unrecoverable Media Errors: 0 00:27:41.555 Lifetime Error Log Entries: 0 00:27:41.555 Warning Temperature Time: 0 minutes 00:27:41.555 Critical Temperature Time: 0 minutes 00:27:41.555 00:27:41.555 Number of Queues 00:27:41.555 ================ 00:27:41.555 Number of I/O Submission Queues: 127 00:27:41.555 Number of I/O Completion Queues: 127 00:27:41.555 00:27:41.555 Active Namespaces 00:27:41.555 ================= 00:27:41.555 Namespace ID:1 00:27:41.555 Error Recovery Timeout: Unlimited 00:27:41.555 Command Set Identifier: NVM (00h) 00:27:41.555 Deallocate: Supported 00:27:41.555 Deallocated/Unwritten Error: Not Supported 00:27:41.555 Deallocated Read Value: Unknown 00:27:41.555 Deallocate in Write Zeroes: Not Supported 00:27:41.555 Deallocated Guard Field: 0xFFFF 00:27:41.555 Flush: Supported 00:27:41.555 Reservation: Supported 00:27:41.555 Namespace Sharing Capabilities: Multiple Controllers 00:27:41.555 Size (in LBAs): 131072 (0GiB) 00:27:41.555 Capacity (in LBAs): 131072 (0GiB)
00:27:41.555 Utilization (in LBAs): 131072 (0GiB) 00:27:41.555 NGUID: ABCDEF0123456789ABCDEF0123456789 00:27:41.555 EUI64: ABCDEF0123456789 00:27:41.555 UUID: 71d7235a-312e-4268-bcd7-656bc7b4214e 00:27:41.555 Thin Provisioning: Not Supported 00:27:41.555 Per-NS Atomic Units: Yes 00:27:41.555 Atomic Boundary Size (Normal): 0 00:27:41.555 Atomic Boundary Size (PFail): 0 00:27:41.555 Atomic Boundary Offset: 0 00:27:41.555 Maximum Single Source Range Length: 65535 00:27:41.555 Maximum Copy Length: 65535 00:27:41.555 Maximum Source Range Count: 1 00:27:41.555 NGUID/EUI64 Never Reused: No 00:27:41.555 Namespace Write Protected: No 00:27:41.555 Number of LBA Formats: 1 00:27:41.555 Current LBA Format: LBA Format #00 00:27:41.555 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:41.555 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.555 rmmod nvme_tcp 00:27:41.555 rmmod nvme_fabrics 00:27:41.555 rmmod nvme_keyring 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3862419 ']' 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3862419 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3862419 ']' 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3862419 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3862419 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 3862419' 00:27:41.555 killing process with pid 3862419 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3862419 00:27:41.555 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3862419 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.814 09:13:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.350 00:27:44.350 real 0m5.353s 00:27:44.350 user 0m4.210s 00:27:44.350 sys 0m1.825s 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:27:44.350 ************************************ 00:27:44.350 END TEST nvmf_identify 00:27:44.350 ************************************ 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.350 09:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.350 ************************************ 00:27:44.351 START TEST nvmf_perf 00:27:44.351 ************************************ 00:27:44.351 09:13:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:27:44.351 * Looking for test storage... 
00:27:44.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
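At this point nvmftestinit takes over. Because NET_TYPE=phy and the transport is TCP, it takes the two physical ports it is about to discover under 0000:09:00.x and splits them across network namespaces, so a single host can act as both target and initiator. The several hundred trace lines that follow reduce to roughly the sequence below; a condensed sketch, with the interface names (cvl_0_0, cvl_0_1) and 10.0.0.x addresses taken from this run:

# The target side gets a dedicated namespace plus one port; the
# initiator keeps the other port in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator side, then verify
# reachability in both directions before any SPDK process starts.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1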
00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.351 09:13:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:46.253 
09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:46.253 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:46.253 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:46.253 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:27:46.254 Found net devices under 0000:09:00.0: cvl_0_0 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:46.254 Found net devices under 0000:09:00.1: cvl_0_1 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:46.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:46.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:27:46.254 00:27:46.254 --- 10.0.0.2 ping statistics --- 00:27:46.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.254 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:46.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:46.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:27:46.254 00:27:46.254 --- 10.0.0.1 ping statistics --- 00:27:46.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:46.254 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3864440 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3864440 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3864440 ']' 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
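The pings above confirm the namespace plumbing works in both directions, and nvmf_tgt (pid 3864440) is now starting inside the namespace. Once waitforlisten sees the RPC socket, perf.sh configures the target entirely through rpc.py; the configuration calls expanded in the trace below boil down to the following sketch (rpc.py path shortened, names and addresses from this run):

# Back-end bdevs: a 64 MiB, 512-byte-block malloc disk plus the local
# NVMe drive at 0000:0b:00.0 (attached as Nvme0n1 via gen_nvme.sh).
rpc.py bdev_malloc_create 64 512
# TCP transport, one subsystem, both bdevs as namespaces, one listener.
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then pointed first at the local PCIe device and afterwards at the fabric target, e.g. the first fabric run below:

spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'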
00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.254 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.254 [2024-07-24 09:13:24.220920] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:27:46.254 [2024-07-24 09:13:24.221008] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:46.254 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.254 [2024-07-24 09:13:24.258120] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:27:46.254 [2024-07-24 09:13:24.290490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:46.515 [2024-07-24 09:13:24.381238] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:46.515 [2024-07-24 09:13:24.381284] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:46.515 [2024-07-24 09:13:24.381300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:46.515 [2024-07-24 09:13:24.381313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:46.515 [2024-07-24 09:13:24.381326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:46.515 [2024-07-24 09:13:24.381662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:46.515 [2024-07-24 09:13:24.381722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:46.515 [2024-07-24 09:13:24.381842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:46.515 [2024-07-24 09:13:24.381981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:46.515 09:13:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:27:49.792 09:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:27:49.792 09:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:27:49.792 09:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:27:49.792 09:13:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:27:50.358 [2024-07-24 09:13:28.447638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:50.358 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:50.615 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.615 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:50.873 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:27:50.873 09:13:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:51.131 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.388 [2024-07-24 09:13:29.427187] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.388 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:51.646 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:27:51.646 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:51.646 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:27:51.646 09:13:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:27:53.018 Initializing NVMe Controllers 00:27:53.018 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:27:53.018 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:27:53.018 Initialization complete. Launching workers. 
00:27:53.018 ======================================================== 00:27:53.018 Latency(us) 00:27:53.018 Device Information : IOPS MiB/s Average min max 00:27:53.018 PCIE (0000:0b:00.0) NSID 1 from core 0: 83535.75 326.31 382.45 27.28 7505.71 00:27:53.018 ======================================================== 00:27:53.018 Total : 83535.75 326.31 382.45 27.28 7505.71 00:27:53.018 00:27:53.018 09:13:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:53.018 EAL: No free 2048 kB hugepages reported on node 1 00:27:54.391 Initializing NVMe Controllers 00:27:54.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:54.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:54.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:54.391 Initialization complete. Launching workers. 00:27:54.391 ======================================================== 00:27:54.391 Latency(us) 00:27:54.391 Device Information : IOPS MiB/s Average min max 00:27:54.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.00 0.34 11733.68 180.60 44771.97 00:27:54.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.00 0.21 19100.17 7923.79 50861.98 00:27:54.391 ======================================================== 00:27:54.391 Total : 142.00 0.55 14535.03 180.60 50861.98 00:27:54.391 00:27:54.391 09:13:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:54.391 EAL: No free 2048 kB hugepages reported on node 1 00:27:55.326 Initializing NVMe Controllers 00:27:55.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:55.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:55.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:55.326 Initialization complete. Launching workers. 
00:27:55.326 ======================================================== 00:27:55.326 Latency(us) 00:27:55.326 Device Information : IOPS MiB/s Average min max 00:27:55.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8652.06 33.80 3698.90 638.18 7567.64 00:27:55.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3854.34 15.06 8335.14 5901.63 16034.60 00:27:55.326 ======================================================== 00:27:55.326 Total : 12506.39 48.85 5127.74 638.18 16034.60 00:27:55.326 00:27:55.326 09:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:27:55.326 09:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:27:55.326 09:13:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:55.326 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.857 Initializing NVMe Controllers 00:27:57.857 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:57.857 Controller IO queue size 128, less than required. 00:27:57.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:57.857 Controller IO queue size 128, less than required. 00:27:57.857 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:57.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:57.857 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:27:57.857 Initialization complete. Launching workers. 00:27:57.857 ======================================================== 00:27:57.857 Latency(us) 00:27:57.857 Device Information : IOPS MiB/s Average min max 00:27:57.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1406.99 351.75 92616.41 48146.31 121830.12 00:27:57.857 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 595.50 148.87 225211.03 133952.20 370945.71 00:27:57.857 ======================================================== 00:27:57.857 Total : 2002.49 500.62 132047.17 48146.31 370945.71 00:27:57.857 00:27:57.857 09:13:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:27:57.857 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.422 No valid NVMe controllers or AIO or URING devices found 00:27:58.422 Initializing NVMe Controllers 00:27:58.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:58.422 Controller IO queue size 128, less than required. 00:27:58.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.422 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:27:58.422 Controller IO queue size 128, less than required. 00:27:58.422 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:58.422 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:27:58.422 WARNING: Some requested NVMe devices were skipped 00:27:58.422 09:13:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:27:58.422 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.954 Initializing NVMe Controllers 00:28:00.954 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:00.954 Controller IO queue size 128, less than required. 00:28:00.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:00.954 Controller IO queue size 128, less than required. 00:28:00.954 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:00.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:00.954 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:00.954 Initialization complete. Launching workers. 00:28:00.954 00:28:00.954 ==================== 00:28:00.954 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:00.954 TCP transport: 00:28:00.954 polls: 37294 00:28:00.954 idle_polls: 12124 00:28:00.954 sock_completions: 25170 00:28:00.954 nvme_completions: 3573 00:28:00.954 submitted_requests: 5312 00:28:00.954 queued_requests: 1 00:28:00.954 00:28:00.954 ==================== 00:28:00.954 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:00.954 TCP transport: 00:28:00.954 polls: 34854 00:28:00.954 idle_polls: 10548 00:28:00.954 sock_completions: 24306 00:28:00.955 nvme_completions: 3549 00:28:00.955 submitted_requests: 5260 00:28:00.955 queued_requests: 1 00:28:00.955 ======================================================== 00:28:00.955 Latency(us) 00:28:00.955 Device Information : IOPS MiB/s Average min max 00:28:00.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 892.07 223.02 147520.31 89592.76 208604.78 00:28:00.955 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 886.08 221.52 147314.34 71251.79 233521.35 00:28:00.955 ======================================================== 00:28:00.955 Total : 1778.15 444.54 147417.67 71251.79 233521.35 00:28:00.955 00:28:00.955 09:13:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:00.955 09:13:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.955 09:13:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:00.955 09:13:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:0b:00.0 ']' 00:28:00.955 09:13:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=35539312-1c88-45b4-8805-5ee62bca70f5 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 35539312-1c88-45b4-8805-5ee62bca70f5 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=35539312-1c88-45b4-8805-5ee62bca70f5 00:28:05.167 09:13:42 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:28:05.167 { 00:28:05.167 "uuid": "35539312-1c88-45b4-8805-5ee62bca70f5", 00:28:05.167 "name": "lvs_0", 00:28:05.167 "base_bdev": "Nvme0n1", 00:28:05.167 "total_data_clusters": 238234, 00:28:05.167 "free_clusters": 238234, 00:28:05.167 "block_size": 512, 00:28:05.167 "cluster_size": 4194304 00:28:05.167 } 00:28:05.167 ]' 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="35539312-1c88-45b4-8805-5ee62bca70f5") .free_clusters' 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=238234 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="35539312-1c88-45b4-8805-5ee62bca70f5") .cluster_size' 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=952936 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 952936 00:28:05.167 952936 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:05.167 09:13:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 35539312-1c88-45b4-8805-5ee62bca70f5 lbd_0 20480 00:28:05.167 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2ab36e20-9d0a-4e89-b066-14478c21e984 00:28:05.167 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2ab36e20-9d0a-4e89-b066-14478c21e984 lvs_n_0 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=12665df0-e693-43cb-ab4b-33e4f52d3e92 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 12665df0-e693-43cb-ab4b-33e4f52d3e92 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1362 -- # local lvs_uuid=12665df0-e693-43cb-ab4b-33e4f52d3e92 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1363 -- # local lvs_info 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local fc 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local cs 00:28:06.098 09:13:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:06.098 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:28:06.098 { 00:28:06.098 "uuid": "35539312-1c88-45b4-8805-5ee62bca70f5", 00:28:06.098 "name": "lvs_0", 00:28:06.098 "base_bdev": "Nvme0n1", 00:28:06.098 "total_data_clusters": 238234, 
00:28:06.098 "free_clusters": 233114, 00:28:06.098 "block_size": 512, 00:28:06.098 "cluster_size": 4194304 00:28:06.098 }, 00:28:06.098 { 00:28:06.098 "uuid": "12665df0-e693-43cb-ab4b-33e4f52d3e92", 00:28:06.098 "name": "lvs_n_0", 00:28:06.098 "base_bdev": "2ab36e20-9d0a-4e89-b066-14478c21e984", 00:28:06.098 "total_data_clusters": 5114, 00:28:06.098 "free_clusters": 5114, 00:28:06.098 "block_size": 512, 00:28:06.098 "cluster_size": 4194304 00:28:06.098 } 00:28:06.098 ]' 00:28:06.098 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="12665df0-e693-43cb-ab4b-33e4f52d3e92") .free_clusters' 00:28:06.098 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # fc=5114 00:28:06.098 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="12665df0-e693-43cb-ab4b-33e4f52d3e92") .cluster_size' 00:28:06.355 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # cs=4194304 00:28:06.355 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # free_mb=20456 00:28:06.356 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # echo 20456 00:28:06.356 20456 00:28:06.356 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:06.356 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 12665df0-e693-43cb-ab4b-33e4f52d3e92 lbd_nest_0 20456 00:28:06.614 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=4523ccbb-592f-429b-a437-5893144ca90f 00:28:06.614 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:06.872 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:06.872 09:13:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 4523ccbb-592f-429b-a437-5893144ca90f 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:07.130 09:13:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:07.387 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.612 Initializing NVMe Controllers 00:28:19.612 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.612 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:19.612 Initialization complete. Launching workers. 
00:28:19.612 ======================================================== 00:28:19.612 Latency(us) 00:28:19.612 Device Information : IOPS MiB/s Average min max 00:28:19.612 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.80 0.02 22409.23 197.58 45920.62 00:28:19.612 ======================================================== 00:28:19.612 Total : 44.80 0.02 22409.23 197.58 45920.62 00:28:19.612 00:28:19.612 09:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:19.612 09:13:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:19.612 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.570 Initializing NVMe Controllers 00:28:29.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:29.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:29.570 Initialization complete. Launching workers. 00:28:29.570 ======================================================== 00:28:29.570 Latency(us) 00:28:29.570 Device Information : IOPS MiB/s Average min max 00:28:29.570 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.39 10.55 11849.64 5025.87 47886.00 00:28:29.570 ======================================================== 00:28:29.570 Total : 84.39 10.55 11849.64 5025.87 47886.00 00:28:29.570 00:28:29.570 09:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:29.570 09:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:29.570 09:14:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:29.570 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.532 Initializing NVMe Controllers 00:28:39.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:39.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:39.532 Initialization complete. Launching workers. 00:28:39.532 ======================================================== 00:28:39.532 Latency(us) 00:28:39.532 Device Information : IOPS MiB/s Average min max 00:28:39.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6939.50 3.39 4610.83 332.56 12085.36 00:28:39.532 ======================================================== 00:28:39.532 Total : 6939.50 3.39 4610.83 332.56 12085.36 00:28:39.532 00:28:39.532 09:14:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:39.532 09:14:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:39.532 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.501 Initializing NVMe Controllers 00:28:49.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:49.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:49.501 Initialization complete. Launching workers. 
00:28:49.501 ======================================================== 00:28:49.501 Latency(us) 00:28:49.501 Device Information : IOPS MiB/s Average min max 00:28:49.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2764.54 345.57 11578.04 628.49 28093.26 00:28:49.501 ======================================================== 00:28:49.501 Total : 2764.54 345.57 11578.04 628.49 28093.26 00:28:49.501 00:28:49.501 09:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:49.501 09:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:49.501 09:14:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:49.501 EAL: No free 2048 kB hugepages reported on node 1 00:28:59.501 Initializing NVMe Controllers 00:28:59.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:59.501 Controller IO queue size 128, less than required. 00:28:59.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:59.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:59.501 Initialization complete. Launching workers. 00:28:59.501 ======================================================== 00:28:59.501 Latency(us) 00:28:59.501 Device Information : IOPS MiB/s Average min max 00:28:59.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11776.35 5.75 10878.25 1735.18 25303.46 00:28:59.501 ======================================================== 00:28:59.501 Total : 11776.35 5.75 10878.25 1735.18 25303.46 00:28:59.501 00:28:59.501 09:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:59.501 09:14:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:59.501 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.468 Initializing NVMe Controllers 00:29:09.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:09.468 Controller IO queue size 128, less than required. 00:29:09.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:09.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:09.468 Initialization complete. Launching workers. 
00:29:09.468 ======================================================== 00:29:09.468 Latency(us) 00:29:09.468 Device Information : IOPS MiB/s Average min max 00:29:09.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1224.10 153.01 105426.11 23057.84 215778.20 00:29:09.468 ======================================================== 00:29:09.468 Total : 1224.10 153.01 105426.11 23057.84 215778.20 00:29:09.468 00:29:09.468 09:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:09.726 09:14:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4523ccbb-592f-429b-a437-5893144ca90f 00:29:10.659 09:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:10.659 09:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2ab36e20-9d0a-4e89-b066-14478c21e984 00:29:10.916 09:14:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:11.174 rmmod nvme_tcp 00:29:11.174 rmmod nvme_fabrics 00:29:11.174 rmmod nvme_keyring 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3864440 ']' 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3864440 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3864440 ']' 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3864440 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:11.174 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3864440 00:29:11.432 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:11.432 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:11.432 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process 
with pid 3864440' 00:29:11.432 killing process with pid 3864440 00:29:11.432 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3864440 00:29:11.432 09:14:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3864440 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.803 09:14:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:15.336 00:29:15.336 real 1m30.902s 00:29:15.336 user 5m26.633s 00:29:15.336 sys 0m17.192s 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:15.336 ************************************ 00:29:15.336 END TEST nvmf_perf 00:29:15.336 ************************************ 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.336 ************************************ 00:29:15.336 START TEST nvmf_fio_host 00:29:15.336 ************************************ 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:15.336 * Looking for test storage... 
00:29:15.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:15.336 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.337 09:14:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.337 09:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:15.337 09:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:15.337 09:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:15.337 09:14:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:17.238 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:17.238 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.238 
09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:17.238 Found net devices under 0000:09:00.0: cvl_0_0 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:17.238 Found net devices under 0000:09:00.1: cvl_0_1 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
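The namespace plumbing traced here condenses to a short iproute2 sequence. The sketch below only restates the commands from this trace in one place (the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace, and the 10.0.0.0/24 addresses are all taken from the surrounding nvmf/common.sh output; it is a summary of what the harness does, not an independent recipe):

    # Isolate the target-side port in its own network namespace so the
    # initiator and target can exercise real NIC hardware on one host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # initiator -> target sanity check
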
00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:17.238 09:14:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:17.238 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:17.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:17.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:29:17.239 00:29:17.239 --- 10.0.0.2 ping statistics --- 00:29:17.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.239 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:17.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:29:17.239 00:29:17.239 --- 10.0.0.1 ping statistics --- 00:29:17.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.239 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3876395 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # 
waitforlisten 3876395 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3876395 ']' 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.239 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.239 [2024-07-24 09:14:55.128055] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:29:17.239 [2024-07-24 09:14:55.128157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.239 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.239 [2024-07-24 09:14:55.168040] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:17.239 [2024-07-24 09:14:55.199899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:17.239 [2024-07-24 09:14:55.293387] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.239 [2024-07-24 09:14:55.293460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.239 [2024-07-24 09:14:55.293476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.239 [2024-07-24 09:14:55.293490] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.239 [2024-07-24 09:14:55.293502] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
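With the target process up and listening on /var/tmp/spdk.sock, host/fio.sh provisions it over JSON-RPC before any fio job runs. The calls traced below reduce to this sequence (rpc.py path shortened for readability; flags, NQN, and serial copied verbatim from this log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192                    # transport options as set by nvmf/common.sh
    rpc.py bdev_malloc_create 64 512 -b Malloc1                       # 64 MiB RAM-backed bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # expose the bdev as a namespace
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
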
00:29:17.239 [2024-07-24 09:14:55.297128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.239 [2024-07-24 09:14:55.297191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:17.239 [2024-07-24 09:14:55.301145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:17.239 [2024-07-24 09:14:55.301149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.497 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.497 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:17.497 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:17.754 [2024-07-24 09:14:55.663004] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.754 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:17.754 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:17.754 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.754 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:18.012 Malloc1 00:29:18.012 09:14:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:18.270 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:18.527 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:18.784 [2024-07-24 09:14:56.694774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.784 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:19.041 09:14:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:19.041 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:19.042 09:14:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:19.299 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:19.299 fio-3.35 00:29:19.299 Starting 1 thread 00:29:19.299 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.824 00:29:21.824 test: (groupid=0, jobs=1): err= 0: pid=3876751: Wed Jul 24 09:14:59 2024 00:29:21.824 read: IOPS=9143, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2007msec) 00:29:21.824 slat (nsec): min=1934, max=159636, avg=2517.52, stdev=1936.03 00:29:21.824 clat (usec): min=2582, max=13121, avg=7711.76, stdev=614.65 00:29:21.824 lat (usec): min=2611, max=13123, avg=7714.28, stdev=614.55 00:29:21.824 clat percentiles (usec): 00:29:21.824 | 1.00th=[ 6325], 5.00th=[ 6783], 10.00th=[ 6980], 20.00th=[ 7242], 00:29:21.824 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 7832], 00:29:21.824 | 70.00th=[ 8029], 80.00th=[ 8160], 90.00th=[ 8455], 95.00th=[ 8586], 00:29:21.824 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[12649], 00:29:21.824 | 99.99th=[13042] 00:29:21.824 bw ( KiB/s): min=35944, max=36968, per=99.94%, avg=36554.00, stdev=435.85, samples=4 00:29:21.824 iops : min= 8986, max= 9242, avg=9138.50, stdev=108.96, samples=4 00:29:21.824 write: IOPS=9150, BW=35.7MiB/s (37.5MB/s)(71.7MiB/2007msec); 0 
zone resets 00:29:21.824 slat (usec): min=2, max=140, avg= 2.67, stdev= 1.43 00:29:21.825 clat (usec): min=1422, max=12405, avg=6242.76, stdev=519.69 00:29:21.825 lat (usec): min=1431, max=12407, avg=6245.43, stdev=519.63 00:29:21.825 clat percentiles (usec): 00:29:21.825 | 1.00th=[ 5080], 5.00th=[ 5473], 10.00th=[ 5669], 20.00th=[ 5866], 00:29:21.825 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6259], 60.00th=[ 6390], 00:29:21.825 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 6980], 00:29:21.825 | 99.00th=[ 7308], 99.50th=[ 7439], 99.90th=[ 9765], 99.95th=[11600], 00:29:21.825 | 99.99th=[12387] 00:29:21.825 bw ( KiB/s): min=36384, max=36800, per=100.00%, avg=36628.00, stdev=180.72, samples=4 00:29:21.825 iops : min= 9096, max= 9200, avg=9157.00, stdev=45.18, samples=4 00:29:21.825 lat (msec) : 2=0.02%, 4=0.12%, 10=99.74%, 20=0.13% 00:29:21.825 cpu : usr=60.12%, sys=35.39%, ctx=89, majf=0, minf=40 00:29:21.825 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:21.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:21.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:21.825 issued rwts: total=18352,18366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:21.825 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:21.825 00:29:21.825 Run status group 0 (all jobs): 00:29:21.825 READ: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2007-2007msec 00:29:21.825 WRITE: bw=35.7MiB/s (37.5MB/s), 35.7MiB/s-35.7MiB/s (37.5MB/s-37.5MB/s), io=71.7MiB (75.2MB), run=2007-2007msec 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1343 -- # asan_lib= 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:21.825 09:14:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:21.825 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:21.825 fio-3.35 00:29:21.825 Starting 1 thread 00:29:21.825 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.352 00:29:24.352 test: (groupid=0, jobs=1): err= 0: pid=3877218: Wed Jul 24 09:15:02 2024 00:29:24.352 read: IOPS=8002, BW=125MiB/s (131MB/s)(251MiB/2009msec) 00:29:24.352 slat (nsec): min=2871, max=96162, avg=4120.84, stdev=2148.28 00:29:24.352 clat (usec): min=3199, max=17245, avg=9313.24, stdev=2106.49 00:29:24.352 lat (usec): min=3203, max=17248, avg=9317.36, stdev=2106.49 00:29:24.352 clat percentiles (usec): 00:29:24.352 | 1.00th=[ 4948], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 7439], 00:29:24.352 | 30.00th=[ 8094], 40.00th=[ 8717], 50.00th=[ 9241], 60.00th=[ 9896], 00:29:24.352 | 70.00th=[10421], 80.00th=[10945], 90.00th=[12125], 95.00th=[13173], 00:29:24.352 | 99.00th=[14091], 99.50th=[14615], 99.90th=[15401], 99.95th=[15926], 00:29:24.352 | 99.99th=[16188] 00:29:24.352 bw ( KiB/s): min=57536, max=75072, per=51.33%, avg=65728.00, stdev=8316.88, samples=4 00:29:24.352 iops : min= 3596, max= 4692, avg=4108.00, stdev=519.81, samples=4 00:29:24.352 write: IOPS=4731, BW=73.9MiB/s (77.5MB/s)(135MiB/1824msec); 0 zone resets 00:29:24.352 slat (usec): min=31, max=194, avg=36.32, stdev= 6.60 00:29:24.352 clat (usec): min=4420, max=19157, avg=11830.89, stdev=2229.85 00:29:24.352 lat (usec): min=4457, max=19194, avg=11867.21, stdev=2230.04 00:29:24.352 clat percentiles (usec): 00:29:24.352 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9896], 00:29:24.352 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:29:24.352 | 70.00th=[12780], 80.00th=[13829], 90.00th=[15008], 95.00th=[16057], 00:29:24.352 | 99.00th=[17433], 99.50th=[18220], 99.90th=[18744], 99.95th=[19006], 00:29:24.352 | 99.99th=[19268] 00:29:24.352 bw ( KiB/s): min=60000, max=78656, per=90.46%, avg=68488.00, stdev=8990.10, samples=4 00:29:24.352 iops : min= 3750, max= 4916, avg=4280.50, stdev=561.88, samples=4 00:29:24.352 lat (msec) : 4=0.12%, 10=46.89%, 20=52.99% 00:29:24.352 cpu : usr=72.27%, sys=24.94%, ctx=30, majf=0, minf=64 
00:29:24.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:29:24.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:24.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:24.352 issued rwts: total=16077,8631,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:24.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:24.352 00:29:24.352 Run status group 0 (all jobs): 00:29:24.352 READ: bw=125MiB/s (131MB/s), 125MiB/s-125MiB/s (131MB/s-131MB/s), io=251MiB (263MB), run=2009-2009msec 00:29:24.352 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=135MiB (141MB), run=1824-1824msec 00:29:24.352 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:24.352 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:24.352 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:24.352 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:24.353 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # bdfs=() 00:29:24.353 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1511 -- # local bdfs 00:29:24.353 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:24.353 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.353 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:29:24.609 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:29:24.610 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:29:24.610 09:15:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 -i 10.0.0.2 00:29:27.930 Nvme0n1 00:29:27.930 09:15:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=3dd148b9-02a8-4ed4-981c-22c5ec6812cf 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 3dd148b9-02a8-4ed4-981c-22c5ec6812cf 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=3dd148b9-02a8-4ed4-981c-22c5ec6812cf 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:29:30.458 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:30.716 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:29:30.716 { 00:29:30.716 "uuid": 
"3dd148b9-02a8-4ed4-981c-22c5ec6812cf", 00:29:30.716 "name": "lvs_0", 00:29:30.716 "base_bdev": "Nvme0n1", 00:29:30.716 "total_data_clusters": 930, 00:29:30.716 "free_clusters": 930, 00:29:30.716 "block_size": 512, 00:29:30.716 "cluster_size": 1073741824 00:29:30.716 } 00:29:30.716 ]' 00:29:30.716 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="3dd148b9-02a8-4ed4-981c-22c5ec6812cf") .free_clusters' 00:29:30.716 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=930 00:29:30.716 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | select(.uuid=="3dd148b9-02a8-4ed4-981c-22c5ec6812cf") .cluster_size' 00:29:30.973 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=1073741824 00:29:30.973 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=952320 00:29:30.973 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 952320 00:29:30.973 952320 00:29:30.973 09:15:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:31.231 13ba4cb4-143f-49eb-8eaa-f4b8b8acc747 00:29:31.231 09:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:31.489 09:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:31.747 09:15:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:32.004 09:15:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:32.262 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:32.262 fio-3.35 00:29:32.262 Starting 1 thread 00:29:32.262 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.787 00:29:34.787 test: (groupid=0, jobs=1): err= 0: pid=3879097: Wed Jul 24 09:15:12 2024 00:29:34.787 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(47.9MiB/2008msec) 00:29:34.788 slat (usec): min=2, max=144, avg= 2.71, stdev= 2.17 00:29:34.788 clat (usec): min=1222, max=171014, avg=11544.98, stdev=11535.93 00:29:34.788 lat (usec): min=1225, max=171051, avg=11547.69, stdev=11536.18 00:29:34.788 clat percentiles (msec): 00:29:34.788 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:29:34.788 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:29:34.788 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:29:34.788 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:29:34.788 | 99.99th=[ 171] 00:29:34.788 bw ( KiB/s): min=17224, max=27040, per=99.83%, avg=24364.00, stdev=4765.00, samples=4 00:29:34.788 iops : min= 4306, max= 6760, avg=6091.00, stdev=1191.25, samples=4 00:29:34.788 write: IOPS=6079, BW=23.7MiB/s (24.9MB/s)(47.7MiB/2008msec); 0 zone resets 00:29:34.788 slat (usec): min=2, max=119, avg= 2.83, stdev= 1.69 00:29:34.788 clat (usec): min=340, max=169315, avg=9326.90, stdev=10855.61 00:29:34.788 lat (usec): min=342, max=169321, avg=9329.73, stdev=10855.86 00:29:34.788 clat percentiles (msec): 00:29:34.788 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:29:34.788 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:29:34.788 | 70.00th=[ 9], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 10], 00:29:34.788 | 99.00th=[ 11], 99.50th=[ 17], 99.90th=[ 169], 99.95th=[ 169], 00:29:34.788 | 99.99th=[ 169] 00:29:34.788 bw ( KiB/s): min=18216, 
max=26368, per=99.93%, avg=24300.00, stdev=4056.39, samples=4 00:29:34.788 iops : min= 4554, max= 6592, avg=6075.00, stdev=1014.10, samples=4 00:29:34.788 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:29:34.788 lat (msec) : 2=0.02%, 4=0.13%, 10=58.26%, 20=41.04%, 250=0.52% 00:29:34.788 cpu : usr=56.95%, sys=39.81%, ctx=99, majf=0, minf=40 00:29:34.788 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:34.788 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:34.788 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:34.788 issued rwts: total=12252,12207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:34.788 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:34.788 00:29:34.788 Run status group 0 (all jobs): 00:29:34.788 READ: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.9MiB (50.2MB), run=2008-2008msec 00:29:34.788 WRITE: bw=23.7MiB/s (24.9MB/s), 23.7MiB/s-23.7MiB/s (24.9MB/s-24.9MB/s), io=47.7MiB (50.0MB), run=2008-2008msec 00:29:34.788 09:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:35.046 09:15:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:35.977 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=7f5e7b08-5162-45e3-b0cf-a080848cad1f 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 7f5e7b08-5162-45e3-b0cf-a080848cad1f 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # local lvs_uuid=7f5e7b08-5162-45e3-b0cf-a080848cad1f 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1363 -- # local lvs_info 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local fc 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local cs 00:29:35.978 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:36.235 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # lvs_info='[ 00:29:36.235 { 00:29:36.235 "uuid": "3dd148b9-02a8-4ed4-981c-22c5ec6812cf", 00:29:36.235 "name": "lvs_0", 00:29:36.235 "base_bdev": "Nvme0n1", 00:29:36.235 "total_data_clusters": 930, 00:29:36.235 "free_clusters": 0, 00:29:36.235 "block_size": 512, 00:29:36.235 "cluster_size": 1073741824 00:29:36.235 }, 00:29:36.235 { 00:29:36.235 "uuid": "7f5e7b08-5162-45e3-b0cf-a080848cad1f", 00:29:36.235 "name": "lvs_n_0", 00:29:36.235 "base_bdev": "13ba4cb4-143f-49eb-8eaa-f4b8b8acc747", 00:29:36.235 "total_data_clusters": 237847, 00:29:36.235 "free_clusters": 237847, 00:29:36.235 "block_size": 512, 00:29:36.235 "cluster_size": 4194304 00:29:36.235 } 00:29:36.235 ]' 00:29:36.235 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # jq '.[] | select(.uuid=="7f5e7b08-5162-45e3-b0cf-a080848cad1f") .free_clusters' 00:29:36.235 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # fc=237847 00:29:36.235 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # jq '.[] | 
select(.uuid=="7f5e7b08-5162-45e3-b0cf-a080848cad1f") .cluster_size' 00:29:36.492 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # cs=4194304 00:29:36.492 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # free_mb=951388 00:29:36.492 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # echo 951388 00:29:36.492 951388 00:29:36.492 09:15:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:29:37.056 00171144-e5c3-430a-8899-578aaf98128c 00:29:37.056 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:29:37.313 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:29:37.572 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib= 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib= 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:37.830 09:15:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:38.087 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:38.087 fio-3.35 00:29:38.087 Starting 1 thread 00:29:38.087 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.622 00:29:40.622 test: (groupid=0, jobs=1): err= 0: pid=3879834: Wed Jul 24 09:15:18 2024 00:29:40.622 read: IOPS=5705, BW=22.3MiB/s (23.4MB/s)(44.8MiB/2010msec) 00:29:40.622 slat (usec): min=2, max=156, avg= 2.82, stdev= 2.57 00:29:40.622 clat (usec): min=4513, max=19580, avg=12346.20, stdev=1064.00 00:29:40.622 lat (usec): min=4518, max=19582, avg=12349.02, stdev=1063.90 00:29:40.622 clat percentiles (usec): 00:29:40.622 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11076], 20.00th=[11469], 00:29:40.622 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:29:40.622 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[13960], 00:29:40.622 | 99.00th=[14746], 99.50th=[15008], 99.90th=[17171], 99.95th=[18220], 00:29:40.622 | 99.99th=[18482] 00:29:40.622 bw ( KiB/s): min=21624, max=23704, per=99.98%, avg=22818.00, stdev=903.29, samples=4 00:29:40.622 iops : min= 5406, max= 5926, avg=5704.50, stdev=225.82, samples=4 00:29:40.622 write: IOPS=5687, BW=22.2MiB/s (23.3MB/s)(44.7MiB/2010msec); 0 zone resets 00:29:40.622 slat (usec): min=2, max=108, avg= 2.98, stdev= 1.78 00:29:40.622 clat (usec): min=2178, max=18559, avg=9923.68, stdev=959.47 00:29:40.622 lat (usec): min=2185, max=18562, avg=9926.66, stdev=959.44 00:29:40.622 clat percentiles (usec): 00:29:40.622 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9241], 00:29:40.622 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10159], 00:29:40.622 | 70.00th=[10290], 80.00th=[10683], 90.00th=[10945], 95.00th=[11338], 00:29:40.622 | 99.00th=[11994], 99.50th=[12387], 99.90th=[17171], 99.95th=[18482], 00:29:40.622 | 99.99th=[18482] 00:29:40.622 bw ( KiB/s): min=22592, max=22912, per=99.90%, avg=22728.00, stdev=137.48, samples=4 00:29:40.622 iops : min= 5648, max= 5728, avg=5682.00, stdev=34.37, samples=4 00:29:40.622 lat (msec) : 4=0.05%, 10=27.47%, 20=72.48% 00:29:40.622 cpu : usr=56.65%, sys=40.27%, ctx=102, majf=0, minf=40 00:29:40.622 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:40.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:40.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:40.622 issued rwts: total=11468,11432,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:40.622 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:29:40.622 00:29:40.622 Run status group 0 (all jobs): 00:29:40.622 READ: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.8MiB (47.0MB), run=2010-2010msec 00:29:40.622 WRITE: bw=22.2MiB/s (23.3MB/s), 22.2MiB/s-22.2MiB/s (23.3MB/s-23.3MB/s), io=44.7MiB (46.8MB), run=2010-2010msec 00:29:40.622 09:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:29:40.622 09:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:29:40.622 09:15:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:29:44.802 09:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:44.802 09:15:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:29:48.110 09:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:48.110 09:15:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:50.043 rmmod nvme_tcp 00:29:50.043 rmmod nvme_fabrics 00:29:50.043 rmmod nvme_keyring 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3876395 ']' 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3876395 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3876395 ']' 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3876395 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3876395 
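The cleanup traced here (fio.sh@72 through @80) tears objects down strictly in reverse order of creation, since each layer claims the bdev beneath it: lvol before lvstore, the nested lvstore before the outer lvol it sits on, and the NVMe controller last. Condensed to the bare RPC calls (rpc.py path shortened):

  # Teardown in reverse order of creation (sketch; rpc.py path shortened).
  rpc=spdk/scripts/rpc.py
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3   # stop exporting first
  $rpc bdev_lvol_delete lvs_n_0/lbd_nest_0                # nested lvol
  $rpc bdev_lvol_delete_lvstore -l lvs_n_0                # nested lvstore
  $rpc bdev_lvol_delete lvs_0/lbd_0                       # outer lvol underneath it
  $rpc bdev_lvol_delete_lvstore -l lvs_0                  # outer lvstore
  $rpc bdev_nvme_detach_controller Nvme0                  # release the NVMe device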
00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3876395' 00:29:50.043 killing process with pid 3876395 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3876395 00:29:50.043 09:15:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3876395 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.043 09:15:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:52.591 00:29:52.591 real 0m37.191s 00:29:52.591 user 2m22.048s 00:29:52.591 sys 0m7.360s 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.591 ************************************ 00:29:52.591 END TEST nvmf_fio_host 00:29:52.591 ************************************ 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:52.591 ************************************ 00:29:52.591 START TEST nvmf_failover 00:29:52.591 ************************************ 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:52.591 * Looking for test storage... 
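The START/END banners above come from autotest_common.sh's run_test helper, which brackets each suite, times it, and propagates the script's exit status to the caller. Roughly this shape (a simplified sketch, not the exact helper):

  # Simplified sketch of the run_test wrapper producing the banners above.
  run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                         # the suite script plus its arguments
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
  }
  run_test nvmf_failover spdk/test/nvmf/host/failover.sh --transport=tcp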
00:29:52.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
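failover.sh@18 now calls nvmftestinit, whose trace follows: it detects the physical E810 ports, moves the target-side port into a private network namespace, and assigns the 10.0.0.x addresses used by every listener in this suite. Stripped of the device-detection logic, the namespace plumbing reduces to a few ip commands (interface names as detected on this host):

  # Namespace plumbing performed by nvmftestinit below (names from this run).
  ip netns add cvl_0_0_ns_spdk                    # private namespace for the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port moves into it
  ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP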
00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:29:52.591 09:15:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:29:54.493 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.494 09:15:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:54.494 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:54.494 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:54.494 Found net devices under 0000:09:00.0: cvl_0_0 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:54.494 Found net devices under 0000:09:00.1: cvl_0_1 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:54.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:29:54.494 00:29:54.494 --- 10.0.0.2 ping statistics --- 00:29:54.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.494 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:54.494 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:54.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:29:54.494 00:29:54.494 --- 10.0.0.1 ping statistics --- 00:29:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.495 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3883077 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:54.495 09:15:32 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3883077 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3883077 ']' 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:54.495 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.495 [2024-07-24 09:15:32.439930] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:29:54.495 [2024-07-24 09:15:32.440011] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.495 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.495 [2024-07-24 09:15:32.478315] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:29:54.495 [2024-07-24 09:15:32.508477] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:54.495 [2024-07-24 09:15:32.598886] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.495 [2024-07-24 09:15:32.598934] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.495 [2024-07-24 09:15:32.598947] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.495 [2024-07-24 09:15:32.598966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:54.495 [2024-07-24 09:15:32.598975] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
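nvmfappstart launched nvmf_tgt inside that namespace with core mask 0xE, i.e. cores 1-3, which matches the three reactor notices below; waitforlisten then polls the RPC socket until the target answers. Done by hand, the sequence looks roughly like this (paths shortened, polling loop simplified):

  # Hand-rolled equivalent of nvmfappstart + waitforlisten (sketch).
  ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Poll the default RPC socket until the target is ready to take RPCs.
  until spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done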
00:29:54.495 [2024-07-24 09:15:32.599126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.495 [2024-07-24 09:15:32.599164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.495 [2024-07-24 09:15:32.599167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.753 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.754 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:55.012 [2024-07-24 09:15:32.954834] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.012 09:15:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:55.270 Malloc0 00:29:55.270 09:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:55.527 09:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.785 09:15:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:56.042 [2024-07-24 09:15:34.085441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.042 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:56.299 [2024-07-24 09:15:34.330148] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:56.299 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:29:56.558 [2024-07-24 09:15:34.615109] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3883371 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3883371 /var/tmp/bdevperf.sock 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3883371 ']' 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:56.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:56.558 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:29:57.124 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:57.124 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:29:57.124 09:15:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.382 NVMe0n1 00:29:57.382 09:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:57.641 00:29:57.641 09:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3883502 00:29:57.641 09:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:57.641 09:15:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:29:58.574 09:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.835 [2024-07-24 09:15:36.869892] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.869979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.869996] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.870009] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.870021] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.870033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.835 [2024-07-24 09:15:36.870045] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870057 .. 09:15:36.870910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: last message repeated with successive timestamps (~65 identical state-change lines trimmed)
*ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.870999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871010] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871022] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871033] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871044] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871160] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 [2024-07-24 09:15:36.871195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477480 is same with the state(5) to be set 00:29:58.836 09:15:36 
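What the trace above amounts to: bdevperf is attached to the same controller name (-b NVMe0) through two portals of one subsystem, so the second bdev_nvme_attach_controller call registers 10.0.0.2:4421 as an alternate path for the existing controller rather than creating a second bdev, and removing the 4420 listener on the target then forces the host off its primary path while perform_tests keeps I/O in flight. A minimal standalone sketch of the same sequence, assuming an SPDK checkout at $SPDK and a bdevperf instance already started in wait-for-RPC mode on /var/tmp/bdevperf.sock (the $SPDK variable and the -z start mode are assumptions of this sketch; addresses, ports, and the NQN are taken verbatim from the log):

rpc=$SPDK/scripts/rpc.py
# primary path: creates bdev NVMe0n1 backed by 10.0.0.2:4420
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# same -b name, second portal: registered as the failover path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# start I/O in the background, then yank the primary listener out from under the host
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The recv-state *ERROR* flood condensed above is logged by the target-side TCP transport while the 4420 queue pairs go through teardown; it is noisy but does not fail this test.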
00:29:58.836 09:15:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:30:02.120 09:15:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:02.377
00:30:02.377 09:15:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:02.635 [2024-07-24 09:15:40.530441] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478250 is same with the state(5) to be set
(the same *ERROR* record repeats 6 more times, 09:15:40.530505 through 09:15:40.530584; duplicates condensed)
00:30:02.635 09:15:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:30:05.911 09:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:05.911 [2024-07-24 09:15:43.786740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:05.911 09:15:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:30:06.843 09:15:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:07.100 09:15:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3883502
00:30:13.673 0
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3883371
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3883371 ']'
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3883371
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883371
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
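Steps @45 through @61 above rotate which portal is reachable while the background bdevperf job is still running, forcing two more failovers (4421 to 4422, then 4422 back to the re-added 4420) before the job is reaped. Condensed to its essentials, a sketch under the same assumptions as the previous one ($rpc and $run_test_pid carried over; the sleeps give each failover time to settle):

# add a third path, then retire the one currently in use
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 3
# bring the original portal back, then retire the temporary one
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
wait "$run_test_pid"   # the lone '0' in the log is the result printed for the bdevperf run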
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883371'
00:30:13.673 killing process with pid 3883371
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3883371
00:30:13.673 09:15:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3883371
00:30:13.673 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:13.673 [2024-07-24 09:15:34.678769] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:30:13.673 [2024-07-24 09:15:34.678855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3883371 ]
00:30:13.673 EAL: No free 2048 kB hugepages reported on node 1
00:30:13.673 [2024-07-24 09:15:34.712886] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:30:13.673 [2024-07-24 09:15:34.754013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:13.673 [2024-07-24 09:15:34.844782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:13.673 Running I/O for 15 seconds...
00:30:13.673 [2024-07-24 09:15:36.872396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:75704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.673 [2024-07-24 09:15:36.872460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the same print_command/'ABORTED - SQ DELETION' completion pair repeats, 09:15:36.872489 through 09:15:36.875811, for every outstanding request: READs lba:75712 through lba:76192 and WRITEs lba:76200 through lba:76592; duplicates condensed)
00:30:13.676 [2024-07-24 09:15:36.875845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.676 [2024-07-24 09:15:36.875862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76600 len:8 PRP1 0x0 PRP2 0x0
00:30:13.676 [2024-07-24 09:15:36.875875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.676 [2024-07-24 09:15:36.875894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
(the abort/manual-completion cycle repeats for the queued WRITEs lba:76608 through lba:76720; duplicates condensed)
00:30:13.677 [2024-07-24 09:15:36.876684] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16c4cb0 was disconnected and freed. reset controller.
00:30:13.677 [2024-07-24 09:15:36.876702] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:13.677 [2024-07-24 09:15:36.876736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.677 [2024-07-24 09:15:36.876755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
(the admin-queue ASYNC EVENT REQUEST abort repeats for cid:1 through cid:3; duplicates condensed)
00:30:13.677 [2024-07-24 09:15:36.876849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:13.677 [2024-07-24 09:15:36.880111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:13.677 [2024-07-24 09:15:36.880148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d1850 (9): Bad file descriptor
00:30:13.677 [2024-07-24 09:15:37.070895] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:13.677 [2024-07-24 09:15:40.531183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.677 [2024-07-24 09:15:40.531225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.677 [2024-07-24 09:15:40.531254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.677 [2024-07-24 09:15:40.531270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar WRITE/completion pairs for lba 114096 through 114136 ...]
00:30:13.677 [2024-07-24 09:15:40.531505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.677 [2024-07-24 09:15:40.531519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.677 [2024-07-24 09:15:40.531533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.677 [2024-07-24 09:15:40.531548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar WRITE/completion pairs for lba 114152 through 114488 ...]
00:30:13.678 [2024-07-24 09:15:40.532930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.678 [2024-07-24 09:15:40.532944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar READ/completion pairs for lba 113840 through 113872 ...]
00:30:13.678 [2024-07-24 09:15:40.533123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.678 [2024-07-24 09:15:40.533138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.678 [2024-07-24 09:15:40.533155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.678 [2024-07-24 09:15:40.533170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar WRITE/completion pairs for lba 114504 through 114608 ...]
00:30:13.679 [2024-07-24 09:15:40.533619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:13.679 [2024-07-24 09:15:40.533633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.679 [2024-07-24 09:15:40.533664] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.679 [2024-07-24 09:15:40.533685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0
00:30:13.679 [2024-07-24 09:15:40.533699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.679 [2024-07-24 09:15:40.533879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort/manual-complete sequence repeats for WRITE lba 114632 and 114640, cid:0, len:8 ...]
00:30:13.679 [2024-07-24 09:15:40.534000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.679 [2024-07-24 09:15:40.534011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.679 [2024-07-24 09:15:40.534023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114648 len:8 PRP1 0x0 PRP2 0x0
00:30:13.679 [2024-07-24 09:15:40.534036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.679 [2024-07-24 09:15:40.534050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort/manual-complete sequence repeats for WRITE lba 114656 through 114832, cid:0, len:8 ...]
00:30:13.680 [2024-07-24 09:15:40.535271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.680 [2024-07-24 09:15:40.535283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114840 len:8 PRP1 0x0 PRP2 0x0
00:30:13.680 [2024-07-24 09:15:40.535296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.680 [2024-07-24 09:15:40.535309] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.680 [2024-07-24 09:15:40.535320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.680 [2024-07-24 09:15:40.535331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113888 len:8 PRP1 0x0 PRP2 0x0
00:30:13.680 [2024-07-24 09:15:40.535344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.680 [2024-07-24 09:15:40.535357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort/manual-complete sequence repeats for READ lba 113896 through 114064, cid:0, len:8 ...]
00:30:13.681 [2024-07-24 09:15:40.536494] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.681 [2024-07-24 09:15:40.536505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:114072 len:8 PRP1 0x0 PRP2 0x0
00:30:13.681 [2024-07-24 09:15:40.536519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.681 [2024-07-24 09:15:40.536532] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:13.681 [2024-07-24 09:15:40.536543] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.681 [2024-07-24 09:15:40.536554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114080 len:8 PRP1 0x0 PRP2 0x0
00:30:13.681 [2024-07-24 09:15:40.536567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.681 [2024-07-24 09:15:40.536580] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... the same abort/manual-complete sequence repeats for WRITE lba 114088 through 114112, cid:0, len:8 ...]
00:30:13.681 [2024-07-24 09:15:40.536794] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:13.681 [2024-07-24 09:15:40.536813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114120 len:8 PRP1 0x0 PRP2 0x0
00:30:13.681 [2024-07-24 09:15:40.536826]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.681 [2024-07-24 09:15:40.536839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.681 [2024-07-24 09:15:40.536850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.681 [2024-07-24 09:15:40.536861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114128 len:8 PRP1 0x0 PRP2 0x0 00:30:13.681 [2024-07-24 09:15:40.536878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.681 [2024-07-24 09:15:40.536891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.681 [2024-07-24 09:15:40.536902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.681 [2024-07-24 09:15:40.536913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114136 len:8 PRP1 0x0 PRP2 0x0 00:30:13.681 [2024-07-24 09:15:40.536926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.681 [2024-07-24 09:15:40.536939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.681 [2024-07-24 09:15:40.536949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.681 [2024-07-24 09:15:40.536960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113824 len:8 PRP1 0x0 PRP2 0x0 00:30:13.681 [2024-07-24 09:15:40.536973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.681 [2024-07-24 09:15:40.536986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.681 [2024-07-24 09:15:40.536997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114144 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114152 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537081] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114160 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537129] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537147] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114168 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114176 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537249] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114184 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537303] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114192 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114200 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537399] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114208 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537448] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114216 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537499] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537510] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114224 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114232 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537617] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114240 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114248 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114256 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 
[2024-07-24 09:15:40.537756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537772] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114264 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114272 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114280 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537917] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114288 len:8 PRP1 0x0 PRP2 0x0 00:30:13.682 [2024-07-24 09:15:40.537941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.682 [2024-07-24 09:15:40.537958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.682 [2024-07-24 09:15:40.537970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.682 [2024-07-24 09:15:40.537981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114296 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.537994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114304 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538054] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114312 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114320 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538175] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114328 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114336 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538270] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114344 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538318] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114352 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114360 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538428] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114368 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114376 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538519] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114384 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538572] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114392 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.538620] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.538634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.538646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114400 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.538658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.544772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 
09:15:40.544805] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.544819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114408 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.544833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.544847] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.544864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.544877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114416 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.544889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.544903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.544914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.544925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114424 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.544938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.544951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.544961] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.544973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114432 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.544985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.544998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.545010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.545021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114440 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.545033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.545046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.545057] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.545068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114448 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.545080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.545093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.545113] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.545126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114456 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.545139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.545153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.545164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.545175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114464 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.545187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.545200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.683 [2024-07-24 09:15:40.545210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.683 [2024-07-24 09:15:40.545221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114472 len:8 PRP1 0x0 PRP2 0x0 00:30:13.683 [2024-07-24 09:15:40.545234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.683 [2024-07-24 09:15:40.545251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545262] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114480 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545299] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114488 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545358] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113832 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545405] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113840 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113848 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113856 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113864 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545583] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113872 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113880 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545696] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 
09:15:40.545707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114496 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545732] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114504 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545791] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114512 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545827] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114520 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545874] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114528 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545932] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114536 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.545955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.545968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.545978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.545993] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114544 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546019] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114552 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114560 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114568 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114576 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114584 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546265] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:114592 len:8 PRP1 0x0 PRP2 0x0 00:30:13.684 [2024-07-24 09:15:40.546299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.684 [2024-07-24 09:15:40.546312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.684 [2024-07-24 09:15:40.546322] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.684 [2024-07-24 09:15:40.546333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114600 len:8 PRP1 0x0 PRP2 0x0 00:30:13.685 [2024-07-24 09:15:40.546345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.685 [2024-07-24 09:15:40.546362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.685 [2024-07-24 09:15:40.546373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.685 [2024-07-24 09:15:40.546385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114608 len:8 PRP1 0x0 PRP2 0x0 00:30:13.685 [2024-07-24 09:15:40.546397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.685 [2024-07-24 09:15:40.546410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.685 [2024-07-24 09:15:40.546421] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.685 [2024-07-24 09:15:40.546432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114616 len:8 PRP1 0x0 PRP2 0x0 00:30:13.685 [2024-07-24 09:15:40.546445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.685 [2024-07-24 09:15:40.546458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.685 [2024-07-24 09:15:40.546469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.685 [2024-07-24 09:15:40.546480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114624 len:8 PRP1 0x0 PRP2 0x0 00:30:13.685 [2024-07-24 09:15:40.546492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.685 [2024-07-24 09:15:40.546555] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f5330 was disconnected and freed. reset controller. 
00:30:13.685 [2024-07-24 09:15:40.546573] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:13.685 [2024-07-24 09:15:40.546612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.685 [2024-07-24 09:15:40.546631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.685 [2024-07-24 09:15:40.546647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.685 [2024-07-24 09:15:40.546660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.685 [2024-07-24 09:15:40.546674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.685 [2024-07-24 09:15:40.546687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.685 [2024-07-24 09:15:40.546701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.685 [2024-07-24 09:15:40.546714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.685 [2024-07-24 09:15:40.546727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:13.685 [2024-07-24 09:15:40.546769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d1850 (9): Bad file descriptor
00:30:13.685 [2024-07-24 09:15:40.550041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:13.685 [2024-07-24 09:15:40.715666] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
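(Editorial sketch, not captured log output: the failover above can be provoked roughly the way SPDK's test/nvmf/host/failover.sh drives it, assuming a running SPDK nvmf TCP target with scripts/rpc.py available; the Nvme0 bdev name, cnode1 subsystem NQN, and listener ports below mirror the log but are illustrative assumptions, not the exact test invocation.)

# expose the subsystem on two TCP ports so the initiator has an alternate path (illustrative values)
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# attach the host-side controller to each path; repeated attaches with the same -b/-n register alternate trids
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# removing the active listener tears down its qpair (the SQ DELETION aborts above) and fails over to 4422
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421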
00:30:13.685 [2024-07-24 09:15:45.085609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:13.685 [2024-07-24 09:15:45.085680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same print_command / ABORTED - SQ DELETION (00/08) record pair repeats for every queued READ on sqid:1 (cid varies), lba:73984 through lba:74208, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0]
00:30:13.686 [2024-07-24 09:15:45.086617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:22 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74296 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.086981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.086995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:13.686 [2024-07-24 09:15:45.087248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087548] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.686 [2024-07-24 09:15:45.087650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.686 [2024-07-24 09:15:45.087667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:13.687 [2024-07-24 09:15:45.087697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:74520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:74528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:74536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:74544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087845] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:74560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:74568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:74576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:74584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.087983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:74592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.087997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:74600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:74608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:74616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:74624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:74632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:74640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:74656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:74664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:74672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:74680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:74688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:74696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:74704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 
[2024-07-24 09:15:45.088473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:74728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:74736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:74744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:74752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:74760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.687 [2024-07-24 09:15:45.088659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:74768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.687 [2024-07-24 09:15:45.088673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:74776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:74784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:74792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088778] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:74808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:74816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:74824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:74832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:74840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:74848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.088980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.088995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:74856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:74864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:74872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:74880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:74888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:74896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:13.688 [2024-07-24 09:15:45.089175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74904 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089317] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74920 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089378] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74928 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089418] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089429] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74936 len:8 
PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74944 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74952 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089577] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74960 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74968 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089666] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74976 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089714] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74984 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 
09:15:45.089753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089767] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74992 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089816] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74504 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.688 [2024-07-24 09:15:45.089865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:13.688 [2024-07-24 09:15:45.089876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:13.688 [2024-07-24 09:15:45.089887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74512 len:8 PRP1 0x0 PRP2 0x0 00:30:13.688 [2024-07-24 09:15:45.089899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:13.689 [2024-07-24 09:15:45.089961] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16f5920 was disconnected and freed. reset controller. 
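Editorial aside: the "(00/08)" pair printed by spdk_nvme_print_completion is the Status Code Type / Status Code in hex — SCT 0x0 (generic command status) and SC 0x08 (Command Aborted due to SQ Deletion), the expected status for I/O in flight when a queue pair is deleted; the trailing p/m/dnr flags are the phase tag, "more", and "do not retry" bits. A minimal decode helper for reading such logs, written as a standalone bash sketch (not part of the test suite):

    # hypothetical helper: decode the "(sct/sc)" hex pair from spdk_nvme_print_completion
    decode_nvme_status() {
      local sct=$((16#$1)) sc=$((16#$2))
      if (( sct == 0 && sc == 8 )); then
        echo "GENERIC / COMMAND ABORTED DUE TO SQ DELETION"
      else
        echo "sct=0x$1 sc=0x$2 (see the NVMe base specification status code tables)"
      fi
    }
    decode_nvme_status 00 08   # -> GENERIC / COMMAND ABORTED DUE TO SQ DELETION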
00:30:13.689 [2024-07-24 09:15:45.089980] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:13.689 [2024-07-24 09:15:45.090014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.689 [2024-07-24 09:15:45.090032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.689 [2024-07-24 09:15:45.090047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.689 [2024-07-24 09:15:45.090061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.689 [2024-07-24 09:15:45.090075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.689 [2024-07-24 09:15:45.090089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.689 [2024-07-24 09:15:45.090116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:13.689 [2024-07-24 09:15:45.090131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:13.689 [2024-07-24 09:15:45.090146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:13.689 [2024-07-24 09:15:45.090195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16d1850 (9): Bad file descriptor
00:30:13.689 [2024-07-24 09:15:45.093519] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:13.689 [2024-07-24 09:15:45.167541] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
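Editorial aside: the failover is handled inside bdev_nvme — the admin queue's outstanding ASYNC EVENT REQUESTs are aborted, the controller is marked failed and disconnected, then re-attached on the next registered path. A hedged sketch of how a script could wait for that recovery over the RPC socket, using only calls that appear in this trace:

    # minimal sketch, assuming the workspace layout of this run; the controller
    # object stays listed across a reset and disappears only when every path is gone
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0; do
      sleep 1
    done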
00:30:13.689
00:30:13.689 Latency(us)
00:30:13.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:13.689 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:13.689 Verification LBA range: start 0x0 length 0x4000
00:30:13.689 NVMe0n1 : 15.00 8349.74 32.62 1122.69 0.00 13485.98 776.72 22136.60
00:30:13.689 ===================================================================================================================
00:30:13.689 Total : 8349.74 32.62 1122.69 0.00 13485.98 776.72 22136.60
00:30:13.689 Received shutdown signal, test time was about 15.000000 seconds
00:30:13.689
00:30:13.689 Latency(us)
00:30:13.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:13.689 ===================================================================================================================
00:30:13.689 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3885337
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3885337 /var/tmp/bdevperf.sock
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3885337 ']'
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
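Editorial aside: the bdevperf relaunch above uses -z to start the app paused on the -r RPC socket, so the test can attach controller paths first and trigger the workload explicitly (-q 128 queue depth, -o 4096-byte I/O, -w verify, -t 1 second). A hedged sketch of that drive sequence, using only binaries and paths visible in this trace:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # start paused; the run does not begin until perform_tests is issued over the socket
    "$spdk/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    # ... attach the NVMe0 paths via rpc.py (see the failover.sh@76-@80 calls below), then:
    "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests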
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:30:13.689 [2024-07-24 09:15:51.569602] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:30:13.689 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:30:13.969 [2024-07-24 09:15:51.806268] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:30:13.969 09:15:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:14.534 NVMe0n1
00:30:14.535 09:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:14.792
00:30:14.792 09:15:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:15.050
00:30:15.050 09:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:15.050 09:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:30:15.307 09:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:15.564 09:15:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:30:18.840 09:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:30:18.840 09:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3886009
00:30:18.840 09:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3886009
00:30:18.840 09:15:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:30:20.213 0
00:30:20.213 09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:30:20.213 [2024-07-24 09:15:51.101044] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:30:20.213 [2024-07-24 09:15:51.101161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3885337 ]
00:30:20.213 EAL: No free 2048 kB hugepages reported on node 1
00:30:20.213 [2024-07-24 09:15:51.132142] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:30:20.213 [2024-07-24 09:15:51.160645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:30:20.213 [2024-07-24 09:15:51.242245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:30:20.213 [2024-07-24 09:15:53.599569] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:20.213 [2024-07-24 09:15:53.599649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:20.213 [2024-07-24 09:15:53.599671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:20.213 [2024-07-24 09:15:53.599703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:20.213 [2024-07-24 09:15:53.599717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:20.213 [2024-07-24 09:15:53.599732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:20.213 [2024-07-24 09:15:53.599745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:20.213 [2024-07-24 09:15:53.599759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:20.213 [2024-07-24 09:15:53.599772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:20.213 [2024-07-24 09:15:53.599785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:20.213 [2024-07-24 09:15:53.599832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:20.213 [2024-07-24 09:15:53.599864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11bf850 (9): Bad file descriptor
00:30:20.213 [2024-07-24 09:15:53.607117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
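Editorial aside: the failover.sh@76-@87 sequence traced above is the core of the test — one controller object, three transport paths, then the active path is removed to force a failover. A condensed sketch of that sequence, using exactly the RPC calls and addresses from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for port in 4421 4422; do                        # extra target listeners
      "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s "$port"
    done
    for port in 4420 4421 4422; do                   # one bdev_nvme controller, three paths
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
    done
    # drop the active path; bdev_nvme fails over to 4421 (see the try.txt recap above)
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
    sleep 3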
00:30:20.213 Running I/O for 1 seconds...
00:30:20.213
00:30:20.213 Latency(us)
00:30:20.213 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:20.213 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:20.213 Verification LBA range: start 0x0 length 0x4000
00:30:20.213 NVMe0n1 : 1.01 8733.22 34.11 0.00 0.00 14598.98 3070.48 11747.93
00:30:20.213 ===================================================================================================================
00:30:20.213 Total : 8733.22 34.11 0.00 0.00 14598.98 3070.48 11747.93
00:30:20.213 09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:20.471 09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:30:20.728 09:15:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:30:20.986 09:15:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:30:24.265 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:30:24.265 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:30:24.265 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3885337
00:30:24.265 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3885337 ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3885337
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3885337
00:30:24.265 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3885337'
killing process with pid 3885337
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3885337
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3885337
00:30:24.523 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:30:24.523 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
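Editorial aside: the failover.sh@95-@103 steps above prune the remaining alternate paths one at a time, confirming in between that the controller object is still present. A hedged sketch of that teardown loop as a script fragment:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for port in 4422 4421; do
      # controller must still exist before each detach
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
      "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    done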
00:30:24.780 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:24.780 rmmod nvme_tcp
00:30:24.780 rmmod nvme_fabrics
00:30:24.780 rmmod nvme_keyring
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3883077 ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3883077
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3883077 ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3883077
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3883077
00:30:25.038 09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3883077'
killing process with pid 3883077
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3883077
09:16:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3883077
00:30:25.296 09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
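Editorial aside: the killprocess helper is traced twice above (pid 3885337 for bdevperf, pid 3883077 for the target). A reconstruction of what those autotest_common.sh steps are doing — a hedged sketch assembled from the trace, not the verbatim helper:

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1                     # @948: reject an empty pid
      kill -0 "$pid" || return 1                    # @952: is the process still alive?
      if [ "$(uname)" = Linux ]; then               # @953
        local name
        name=$(ps --no-headers -o comm= "$pid")     # @954: e.g. reactor_0 / reactor_1
        [ "$name" = sudo ] && return 1              # @958: never kill a sudo wrapper
      fi
      echo "killing process with pid $pid"          # @966
      kill "$pid" && wait "$pid"                    # @967 / @972
    }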
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:25.296 09:16:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.199 09:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:27.199 00:30:27.199 real 0m35.057s 00:30:27.199 user 2m1.852s 00:30:27.199 sys 0m6.580s 00:30:27.199 09:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:27.199 09:16:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:27.199 ************************************ 00:30:27.199 END TEST nvmf_failover 00:30:27.199 ************************************ 00:30:27.200 09:16:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:27.200 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:27.200 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:27.200 09:16:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:27.200 ************************************ 00:30:27.200 START TEST nvmf_host_discovery 00:30:27.200 ************************************ 00:30:27.200 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:27.458 * Looking for test storage... 00:30:27.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:27.458 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:27.459 09:16:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:27.459 09:16:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:29.361 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:29.362 09:16:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:29.362 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:29.362 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:29.362 Found net devices under 0000:09:00.0: cvl_0_0 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:29.362 Found net devices under 0000:09:00.1: cvl_0_1 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:29.362 09:16:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:29.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:29.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:30:29.362 00:30:29.362 --- 10.0.0.2 ping statistics --- 00:30:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.362 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:29.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:29.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:30:29.362 00:30:29.362 --- 10.0.0.1 ping statistics --- 00:30:29.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:29.362 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3888600 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3888600 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3888600 ']' 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
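The device scan traced above classifies candidate NICs purely by PCI vendor:device ID: Intel E810 ports (0x1592, 0x159b), Intel X722 (0x37d2), and a list of Mellanox ConnectX parts, before settling on the two E810 ports whose net devices are cvl_0_0 and cvl_0_1. A minimal standalone sketch of that ID-to-family lookup (the lspci enumeration below is an assumption made to keep the example self-contained; the suite walks a prebuilt pci_bus_cache instead, and only a subset of the Mellanox IDs is shown):

    #!/usr/bin/env bash
    # Sketch: map PCI vendor:device IDs to the NIC families the test knows about.
    intel=0x8086 mellanox=0x15b3
    declare -A nic_family=(
        ["$intel:0x1592"]=e810 ["$intel:0x159b"]=e810
        ["$intel:0x37d2"]=x722
        ["$mellanox:0x1017"]=mlx ["$mellanox:0x1019"]=mlx
    )
    # Enumerate devices as "addr class: vendor:device" and look each one up.
    while read -r addr _ ids _; do
        vendor=0x${ids%%:*} device=0x${ids##*:}
        fam=${nic_family["$vendor:$device"]:-}
        [[ -n $fam ]] && echo "Found $addr ($vendor - $device) -> $fam"
    done < <(lspci -Dn)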
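The nvmf_tcp_init steps just traced are what let one dual-port card serve as both fabric endpoints: the target port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, an iptables rule admits NVMe/TCP on port 4420, and the two pings prove the path in both directions. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator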
00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:29.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.362 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.362 [2024-07-24 09:16:07.444033] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:30:29.362 [2024-07-24 09:16:07.444132] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:29.621 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.621 [2024-07-24 09:16:07.482119] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:29.621 [2024-07-24 09:16:07.514779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.621 [2024-07-24 09:16:07.605056] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:29.621 [2024-07-24 09:16:07.605132] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:29.621 [2024-07-24 09:16:07.605150] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:29.621 [2024-07-24 09:16:07.605163] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:29.621 [2024-07-24 09:16:07.605175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
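nvmfappstart, whose EAL and app_setup_trace output appears above, backgrounds nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A simplified sketch of that polling idiom (the rpc_get_methods probe, the sleep interval, and rpc.py being on PATH are assumptions; the real helper in autotest_common.sh carries more error handling):

    # Simplified sketch of waitforlisten; see the assumptions in the lead-in.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries--)); do
            kill -0 "$pid" 2> /dev/null || return 1     # app died during startup
            if rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                # socket is answering RPCs
            fi
            sleep 0.1
        done
        return 1                                        # never came up
    }

    # As traced: target on core 1 (-m 0x2) inside the target namespace.
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    waitforlisten $!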
00:30:29.621 [2024-07-24 09:16:07.605213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:29.621 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:29.621 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:29.621 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:29.621 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:29.621 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 [2024-07-24 09:16:07.753409] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 [2024-07-24 09:16:07.761607] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 null0 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 null1 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@45 -- # hostpid=3888634 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3888634 /tmp/host.sock 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3888634 ']' 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:29.879 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:29.880 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:29.880 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:29.880 09:16:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:29.880 [2024-07-24 09:16:07.841213] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:30:29.880 [2024-07-24 09:16:07.841302] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3888634 ] 00:30:29.880 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.880 [2024-07-24 09:16:07.875853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
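Between the two app launches above, discovery.sh provisions the freshly started target over RPC and then brings up a second SPDK instance to act as the host, pinned to core 0 with a private RPC socket so the two processes cannot collide. Condensed from the trace (rpc.py stands in for the suite's rpc_cmd wrapper, and the binaries are assumed to be on PATH):

    # Target side: TCP transport, discovery listener, and two null bdevs.
    rpc.py nvmf_create_transport -t tcp -o -u 8192      # flags exactly as traced
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
           -t tcp -a 10.0.0.2 -s 8009                   # discovery service on 8009
    rpc.py bdev_null_create null0 1000 512              # 1000 MiB, 512 B blocks
    rpc.py bdev_null_create null1 1000 512

    # Host side: a second nvmf_tgt on core 0 with its own RPC socket.
    nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
    waitforlisten "$hostpid" /tmp/host.sock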
00:30:29.880 [2024-07-24 09:16:07.904503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.880 [2024-07-24 09:16:07.992437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.138 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.139 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.139 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.452 09:16:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.452 [2024-07-24 09:16:08.383250] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:30.452 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:30.453 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:30.721 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:30.721 09:16:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:31.287 [2024-07-24 09:16:09.162973] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:31.287 [2024-07-24 09:16:09.163017] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:31.287 [2024-07-24 09:16:09.163051] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:31.287 [2024-07-24 09:16:09.290443] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:31.287 [2024-07-24 09:16:09.393220] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:31.287 [2024-07-24 09:16:09.393243] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.544 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 
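Every assertion from here on funnels through the helpers whose jq/sort/xargs pipelines the trace keeps repeating: get_subsystem_names and get_bdev_list flatten the host's controller and bdev lists into sorted, space-separated strings, and waitforcondition re-evaluates an expected-state expression once a second for up to ten tries instead of failing on the first look. Reconstructed from the trace (again with rpc.py standing in for rpc_cmd):

    get_subsystem_names() {
        rpc.py -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs           # e.g. "nvme0"
    }

    get_bdev_list() {
        rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs           # e.g. "nvme0n1 nvme0n2"
    }

    waitforcondition() {
        local cond=$1 max=10
        while ((max--)); do
            eval "$cond" && return 0                    # condition met
            sleep 1
        done
        return 1                                        # timed out
    }

    # Typical use, as in the trace once discovery has attached nvme0:
    waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'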
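The is_notification_count_eq checks, one traced above and more below, lean on SPDK's notification bus: notify_get_notifications is queried starting from the last consumed event id, the batch is counted with jq '. | length', and the cursor is advanced by that count so each check sees only new events. That is why notify_id steps 0 -> 1 -> 2 in the trace as null0 and then null1 register on the host. A sketch of that accounting:

    # Sketch of the traced notification accounting (notify_id is the cursor).
    notify_id=0
    get_notification_count() {
        notification_count=$(rpc.py -s /tmp/host.sock \
            notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))   # consume what we saw
    }

    is_notification_count_eq() {
        local expected_count=$1
        get_notification_count
        ((notification_count == expected_count))        # status is the verdict
    }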
00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:31.545 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:31.803 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:32.061 09:16:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:32.061 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:32.062 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:32.062 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.062 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.062 09:16:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.062 [2024-07-24 09:16:10.036340] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:32.062 [2024-07-24 09:16:10.037650] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:32.062 [2024-07-24 09:16:10.037699] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:32.062 09:16:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:32.062 [2024-07-24 09:16:10.163529] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:32.062 09:16:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:32.320 [2024-07-24 09:16:10.267264] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:32.320 [2024-07-24 09:16:10.267289] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:32.320 [2024-07-24 09:16:10.267298] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:33.257 09:16:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.257 [2024-07-24 09:16:11.264409] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:33.257 [2024-07-24 09:16:11.264453] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:33.257 [2024-07-24 09:16:11.273642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:33.257 [2024-07-24 09:16:11.273678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.257 [2024-07-24 09:16:11.273698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:33.257 [2024-07-24 09:16:11.273713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.257 [2024-07-24 09:16:11.273728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:33.257 [2024-07-24 09:16:11.273743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.257 [2024-07-24 09:16:11.273759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:33.257 [2024-07-24 09:16:11.273773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:33.257 [2024-07-24 09:16:11.273788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.257 [2024-07-24 09:16:11.283643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.257 [2024-07-24 09:16:11.293687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.257 [2024-07-24 09:16:11.293922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.257 [2024-07-24 09:16:11.293955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.257 [2024-07-24 09:16:11.293974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.257 [2024-07-24 09:16:11.294000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.257 [2024-07-24 09:16:11.294025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.257 [2024-07-24 09:16:11.294041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.257 [2024-07-24 09:16:11.294060] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.257 [2024-07-24 09:16:11.294088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:33.257 [2024-07-24 09:16:11.303776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.257 [2024-07-24 09:16:11.303997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.257 [2024-07-24 09:16:11.304028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.257 [2024-07-24 09:16:11.304046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.257 [2024-07-24 09:16:11.304070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.257 [2024-07-24 09:16:11.304125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.257 [2024-07-24 09:16:11.304145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.257 [2024-07-24 09:16:11.304176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.257 [2024-07-24 09:16:11.304195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:33.257 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:33.257 [2024-07-24 09:16:11.313855] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.257 [2024-07-24 09:16:11.314079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.257 [2024-07-24 09:16:11.314118] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.257 [2024-07-24 09:16:11.314155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.258 [2024-07-24 09:16:11.314188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.258 [2024-07-24 09:16:11.314209] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.258 [2024-07-24 09:16:11.314223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.258 [2024-07-24 09:16:11.314236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.258 [2024-07-24 09:16:11.314255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:33.258 [2024-07-24 09:16:11.323938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.258 [2024-07-24 09:16:11.324129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.258 [2024-07-24 09:16:11.324164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.258 [2024-07-24 09:16:11.324181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.258 [2024-07-24 09:16:11.324204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.258 [2024-07-24 09:16:11.324249] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.258 [2024-07-24 09:16:11.324268] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.258 [2024-07-24 09:16:11.324282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.258 [2024-07-24 09:16:11.324301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
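The retry loop driving every assertion in this log can be read back out of the xtrace references above (common/autotest_common.sh@912-@918). A minimal sketch of waitforcondition, reconstructed from those trace lines rather than copied from the source tree, so details may differ:

    waitforcondition() {
        local cond=$1    # @912: the condition arrives as a single string
        local max=10     # @913: at most ten one-second attempts
        while (( max-- )); do    # @914
            # @915: re-evaluate the string each pass, e.g.
            # '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
            eval "$cond" && return 0    # @916: condition met
            sleep 1                     # @918: give the discovery poller time
        done
        return 1    # never became true within ~10 seconds
    }

Polling like this is what lets the test ride out the asynchronous AER and discovery-log-page traffic interleaved in the messages above.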
00:30:33.258 [2024-07-24 09:16:11.334025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.258 [2024-07-24 09:16:11.334228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.258 [2024-07-24 09:16:11.334256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.258 [2024-07-24 09:16:11.334272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.258 [2024-07-24 09:16:11.334294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.258 [2024-07-24 09:16:11.334326] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.258 [2024-07-24 09:16:11.334343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.258 [2024-07-24 09:16:11.334356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.258 [2024-07-24 09:16:11.334374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.258 [2024-07-24 09:16:11.344118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:33.258 [2024-07-24 09:16:11.344289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:33.258 [2024-07-24 09:16:11.344316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd8f6e0 with addr=10.0.0.2, port=4420 00:30:33.258 [2024-07-24 09:16:11.344332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8f6e0 is same with the state(5) to be set 00:30:33.258 [2024-07-24 09:16:11.344359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8f6e0 (9): Bad file descriptor 00:30:33.258 [2024-07-24 09:16:11.344404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:33.258 [2024-07-24 09:16:11.344422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:33.258 [2024-07-24 09:16:11.344436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:33.258 [2024-07-24 09:16:11.344455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
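The query helpers traced at host/discovery.sh@55, @59 and @63 share one rpc_cmd-plus-jq pattern, with xargs flattening the result to a single space-separated line so it can be string-compared against expectations such as "nvme0n1 nvme0n2" or "4420 4421". A sketch reconstructed from the trace (the socket path matches the trace; error handling is omitted):

    # controller names on the host instance (host/discovery.sh@59)
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    # bdevs the discovery service has created (host/discovery.sh@55)
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    # listening ports (trsvcid) of every path to one controller (host/discovery.sh@63)
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

So the check at host/discovery.sh@131 just below amounts to waiting until get_subsystem_paths nvme0 prints only 4421, i.e. until the failing 4420 path has been pruned by the discovery poller.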
00:30:33.258 [2024-07-24 09:16:11.350436] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:33.258 [2024-07-24 09:16:11.350480] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:33.258 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # get_notification_count 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.516 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:33.517 09:16:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:33.517 09:16:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.891 [2024-07-24 09:16:12.630905] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:34.891 [2024-07-24 09:16:12.630942] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:34.891 [2024-07-24 09:16:12.630964] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:34.891 [2024-07-24 09:16:12.759397] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:34.891 [2024-07-24 09:16:12.825640] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:34.891 [2024-07-24 09:16:12.825683] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.891 request: 00:30:34.891 { 00:30:34.891 "name": "nvme", 00:30:34.891 "trtype": "tcp", 00:30:34.891 "traddr": "10.0.0.2", 00:30:34.891 "adrfam": "ipv4", 00:30:34.891 "trsvcid": "8009", 00:30:34.891 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:34.891 "wait_for_attach": true, 00:30:34.891 "method": "bdev_nvme_start_discovery", 00:30:34.891 "req_id": 1 00:30:34.891 } 00:30:34.891 Got JSON-RPC error response 00:30:34.891 response: 00:30:34.891 { 00:30:34.891 "code": -17, 00:30:34.891 "message": "File exists" 00:30:34.891 } 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.891 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.891 request: 00:30:34.892 { 00:30:34.892 "name": "nvme_second", 00:30:34.892 "trtype": "tcp", 00:30:34.892 "traddr": "10.0.0.2", 00:30:34.892 "adrfam": "ipv4", 00:30:34.892 "trsvcid": "8009", 00:30:34.892 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:34.892 "wait_for_attach": true, 00:30:34.892 "method": "bdev_nvme_start_discovery", 00:30:34.892 "req_id": 1 00:30:34.892 } 00:30:34.892 Got JSON-RPC error response 00:30:34.892 response: 00:30:34.892 { 00:30:34.892 "code": -17, 00:30:34.892 "message": "File exists" 00:30:34.892 } 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:34.892 09:16:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:34.892 09:16:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:35.150 09:16:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:36.094 [2024-07-24 09:16:14.045935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:36.094 [2024-07-24 09:16:14.046001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90aa0 with addr=10.0.0.2, port=8010 00:30:36.094 [2024-07-24 09:16:14.046034] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:36.094 [2024-07-24 09:16:14.046050] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:36.094 [2024-07-24 09:16:14.046063] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:37.028 [2024-07-24 09:16:15.048601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:37.028 [2024-07-24 09:16:15.048668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd90aa0 with addr=10.0.0.2, port=8010 00:30:37.028 [2024-07-24 09:16:15.048701] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:37.028 [2024-07-24 09:16:15.048717] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:30:37.028 [2024-07-24 09:16:15.048747] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:37.962 [2024-07-24 09:16:16.050617] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:37.962 request: 00:30:37.962 { 00:30:37.962 "name": "nvme_second", 00:30:37.962 "trtype": "tcp", 00:30:37.962 "traddr": "10.0.0.2", 00:30:37.962 "adrfam": "ipv4", 00:30:37.962 "trsvcid": "8010", 00:30:37.962 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:37.962 "wait_for_attach": false, 00:30:37.962 "attach_timeout_ms": 3000, 00:30:37.962 "method": "bdev_nvme_start_discovery", 00:30:37.962 "req_id": 1 00:30:37.962 } 00:30:37.962 Got JSON-RPC error response 00:30:37.962 response: 00:30:37.962 { 00:30:37.962 "code": -110, 00:30:37.962 "message": "Connection timed out" 00:30:37.962 } 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:37.962 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3888634 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:38.220 rmmod nvme_tcp 00:30:38.220 rmmod nvme_fabrics 00:30:38.220 rmmod nvme_keyring 00:30:38.220 09:16:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3888600 ']' 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3888600 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3888600 ']' 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3888600 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:30:38.220 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3888600 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3888600' 00:30:38.221 killing process with pid 3888600 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3888600 00:30:38.221 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3888600 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:38.479 09:16:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.384 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:40.384 00:30:40.384 real 0m13.213s 00:30:40.384 user 0m19.215s 00:30:40.384 sys 0m2.787s 00:30:40.384 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:40.384 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:40.384 ************************************ 00:30:40.384 END TEST nvmf_host_discovery 00:30:40.384 ************************************ 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
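The teardown traced above (common/autotest_common.sh@948-@972) stops target process 3888600 via killprocess after the nvme kernel modules have been removed. A rough reconstruction from the visible trace; the real helper carries extra cases (for instance when the tracked pid is a sudo wrapper) that are only simplified here:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1               # @948: no pid recorded
        kill -0 "$pid" || return 0              # @952: already gone
        if [ "$(uname)" = Linux ]; then         # @953
            # @954: command name, e.g. reactor_1 for an SPDK reactor thread
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # @958: simplified guard
        fi
        echo "killing process with pid $pid"    # @966
        kill "$pid"                             # @967
        wait "$pid"                             # @972: reap so the ports free up
    }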
00:30:40.644 09:16:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:40.644 ************************************ 00:30:40.644 START TEST nvmf_host_multipath_status 00:30:40.644 ************************************ 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:40.644 * Looking for test storage... 00:30:40.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
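The NVME_HOSTNQN/NVME_HOSTID pair above comes straight from nvme-cli, and the hostid is just the UUID suffix of the generated NQN. One way to reproduce the pair (a sketch; the harness's own derivation may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # keep the bare UUID, matching the values logged above
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")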
00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.644 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
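How the target command line is assembled (a reconstruction from the xtrace; $rootdir stands in for the spdk checkout, and anything outside the logged lines is an assumption): nvmf/common.sh builds NVMF_APP piecewise, then prefixes the namespace wrapper once the netns exists, which yields the "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x3" launch seen further below:

    NVMF_APP=("$rootdir/build/bin/nvmf_tgt")               # assumed starting value
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)            # shm id + tracepoint mask (verbatim above)
    NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") # prefix the netns wrapper
    "${NVMF_APP[@]}" -m 0x3 &                              # core mask added by nvmfappstart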
00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:30:40.645 09:16:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:42.548 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.548 
09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:42.548 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:42.548 Found net devices under 0000:09:00.0: cvl_0_0 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.548 09:16:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:42.548 Found net devices under 0000:09:00.1: cvl_0_1 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.548 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:30:42.549 00:30:42.549 --- 10.0.0.2 ping statistics --- 00:30:42.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.549 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:30:42.549 00:30:42.549 --- 10.0.0.1 ping statistics --- 00:30:42.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.549 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:42.549 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3891792 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3891792 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3891792 ']' 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:30:42.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:42.808 09:16:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:42.808 [2024-07-24 09:16:20.729366] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:30:42.808 [2024-07-24 09:16:20.729467] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.808 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.808 [2024-07-24 09:16:20.766517] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:30:42.808 [2024-07-24 09:16:20.795047] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:42.808 [2024-07-24 09:16:20.888565] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.808 [2024-07-24 09:16:20.888623] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.808 [2024-07-24 09:16:20.888637] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:42.808 [2024-07-24 09:16:20.888649] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:42.808 [2024-07-24 09:16:20.888659] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.808 [2024-07-24 09:16:20.888751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:42.808 [2024-07-24 09:16:20.888755] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3891792 00:30:43.066 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:43.324 [2024-07-24 09:16:21.305369] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.324 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:43.582 Malloc0 00:30:43.582 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:30:43.840 09:16:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:44.098 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:44.356 [2024-07-24 09:16:22.442201] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:44.356 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:44.614 [2024-07-24 09:16:22.682852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3891958 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3891958 /var/tmp/bdevperf.sock 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3891958 ']' 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:44.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
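At this point the target side is fully assembled inside the cvl_0_0_ns_spdk namespace. Condensed from the xtrace above (full script paths dropped; rpc.py talks to the default /var/tmp/spdk.sock), the setup sequence was:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB ramdisk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDK00000000000001 -r -m 2                  # -r enables ANA reporting
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two bdev_nvme_attach_controller calls that follow give bdevperf one path per port (the second with -x multipath), producing the single Nvme0n1 that the status checks poll.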
00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:44.614 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:30:45.180 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:45.180 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:30:45.180 09:16:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:45.180 09:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:30:45.743 Nvme0n1 00:30:45.743 09:16:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:30:46.000 Nvme0n1 00:30:46.001 09:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:30:46.001 09:16:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:30:48.529 09:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:30:48.529 09:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:30:48.529 09:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:48.529 09:16:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:49.903 09:16:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:50.161 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:50.161 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:50.161 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.161 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:50.419 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.419 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:50.419 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.419 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:50.678 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.678 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:50.678 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.678 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:50.936 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:50.936 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:50.936 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:50.936 09:16:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:51.194 09:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:51.194 09:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:30:51.194 09:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:51.452 09:16:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:51.710 09:16:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:30:52.643 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:30:52.643 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:30:52.643 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.643 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:52.900 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:52.900 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:30:52.900 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:52.900 09:16:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:53.157 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.157 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:53.157 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.157 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:53.415 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.415 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:53.415 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.415 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:53.673 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.673 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:53.673 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.673 09:16:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:53.931 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:53.931 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:53.931 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:53.931 09:16:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:54.189 09:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:54.189 09:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:30:54.189 09:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:54.447 09:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:30:54.706 09:16:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:30:55.691 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:30:55.691 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:55.691 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.691 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:55.948 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:55.948 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:55.948 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:55.948 09:16:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:56.206 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:56.206 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:56.206 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.206 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:56.463 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.463 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:56.463 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.463 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:56.721 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.721 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:56.721 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.721 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:56.978 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:56.978 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:30:56.978 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:56.978 09:16:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:30:57.235 09:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:57.235 09:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:30:57.235 09:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:57.493 09:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:30:57.751 09:16:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:30:58.685 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:30:58.685 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:30:58.685 09:16:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.685 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:30:58.942 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:58.942 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:30:58.942 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:58.942 09:16:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:30:59.199 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:30:59.199 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:30:59.199 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.199 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:30:59.457 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.457 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:30:59.457 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.457 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:30:59.715 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.715 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:30:59.715 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.715 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:30:59.973 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:30:59.973 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:30:59.973 09:16:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:30:59.973 09:16:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:00.231 09:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:00.231 09:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:00.231 09:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:00.488 09:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:00.746 09:16:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:01.677 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:01.677 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:01.677 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.677 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:01.935 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:01.935 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:01.935 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:01.935 09:16:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:02.192 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.192 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:02.192 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.192 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:02.449 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.449 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:02.449 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.449 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:02.707 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:02.707 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:02.707 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.707 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:02.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:02.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:02.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:02.965 09:16:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:03.222 09:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:03.222 09:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:03.222 09:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:03.480 09:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:03.738 09:16:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:04.672 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:04.672 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:04.672 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:04.672 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:04.930 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:04.930 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:04.930 09:16:42 
00:31:04.930 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:04.930 09:16:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:05.189 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:05.189 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:05.189 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.189 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:05.446 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:05.446 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:05.446 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.446 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:05.703 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:05.703 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:05.703 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.703 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:05.961 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:05.961 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:05.961 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:05.961 09:16:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:06.221 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:06.221 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:31:06.479 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:31:06.479 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:06.738 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:06.996 09:16:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:31:07.930 09:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:31:07.930 09:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:07.930 09:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:07.930 09:16:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:08.188 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:08.188 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:08.188 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:08.188 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:08.446 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:08.446 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:08.446 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:08.446 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:08.704 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:08.704 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:08.704 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:08.704 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:08.962 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
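Every @64 probe above follows one pattern: ask the bdevperf app, over its private RPC socket /var/tmp/bdevperf.sock, for its I/O paths, select the path whose listener port matches, and compare a single attribute against the expected literal. A sketch of that probe under the same names the trace shows (the argument handling is an assumption):
port_status() {
    local port=$1 attr=$2 expected=$3 got
    got=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ $got == "$expected" ]]
}
check_status is then six such probes in a row: current, connected and accessible for port 4420 followed by port 4421, which is exactly the rhythm of the [[ ... ]] comparisons running through this log.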
00:31:08.962 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:08.962 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:08.962 09:16:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:09.220 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:09.220 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:09.220 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:09.220 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:09.477 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:09.477 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:31:09.477 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:09.735 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:09.993 09:16:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:31:10.926 09:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:31:10.926 09:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:10.926 09:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:10.926 09:16:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:11.184 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:11.184 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:11.184 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:11.184 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:11.442 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:11.442 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:11.442 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:11.442 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:11.700 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:11.700 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:11.700 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:11.700 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:11.964 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:11.964 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:11.964 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:11.964 09:16:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:12.220 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:12.220 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:12.220 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:12.220 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:12.478 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:12.478 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:31:12.478 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:12.737 09:16:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:31:12.994 09:16:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
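Note how the three ANA states map onto the flags asserted above: optimized and non_optimized listeners both stay accessible, an inaccessible listener keeps its TCP connection (connected stays true) but drops accessible, and only an accessible path may become current. The bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call at @116 is why later checks can expect current=true on both ports at once; under the default active_passive policy a single path carries the I/O, and even with active_active a non_optimized path is only used when no optimized path remains, which is why the @125 check still expects current=false on the non_optimized 4420 port. A one-liner for watching all three flags per path while driving these transitions (an illustration, not a step of the test script):
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
    jq -r '.poll_groups[].io_paths[] | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'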
00:31:13.926 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:31:13.926 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:13.926 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:13.926 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:14.183 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:14.183 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:14.183 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:14.183 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:14.442 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:14.442 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:14.442 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:14.442 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:14.700 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:14.700 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:14.700 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:14.700 09:16:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:14.958 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:14.958 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:14.958 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:14.958 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:15.217 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:15.217 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:15.217 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:15.217 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:15.474 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:15.474 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:31:15.474 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:15.731 09:16:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:15.990 09:16:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:31:16.923 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:31:16.923 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:16.923 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:16.923 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:17.181 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:17.181 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:17.181 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:17.181 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:17.439 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:17.439 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:17.439 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:17.439 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:17.696 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:17.696 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:17.696 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:17.696 09:16:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:17.954 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:17.954 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:17.954 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:17.954 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:18.213 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:18.213 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:18.213 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:18.213 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3891958
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3891958 ']'
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3891958
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891958
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891958'
killing process with pid 3891958
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3891958
00:31:18.471 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3891958
00:31:18.743 Connection closed with partial response:
00:31:18.743
00:31:18.743
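killprocess at common/autotest_common.sh@948-972 is the harness's guarded teardown: validate the pid argument, probe that the process is still alive, resolve its name so a sudo wrapper is never signalled by mistake, then signal and reap it. A condensed sketch of the path taken in this run (the real helper has more branches than shown):
killprocess() {
    [ -n "$1" ] || return 1                      # the '[' -z ... ']' guard
    kill -0 "$1"                                 # probe: pid must be alive
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$1")
    [ "$process_name" = sudo ] && return 1       # never kill sudo itself
    echo "killing process with pid $1"
    kill "$1"
    wait "$1"
}
The victim here is the bdevperf instance (file-prefix spdk_pid3891958 in the EAL parameters below; its reactor thread reports comm reactor_2), so the I/O job ends with the 'Connection closed with partial response' notice before the harness dumps the captured output.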
00:31:18.743 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3891958
00:31:18.743 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:18.743 [2024-07-24 09:16:22.740436] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:31:18.743 [2024-07-24 09:16:22.740529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3891958 ]
00:31:18.743 EAL: No free 2048 kB hugepages reported on node 1
00:31:18.743 [2024-07-24 09:16:22.773994] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:31:18.743 [2024-07-24 09:16:22.803097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:18.743 [2024-07-24 09:16:22.891768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:18.743 Running I/O for 90 seconds...
00:31:18.743 [2024-07-24 09:16:38.412697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.412797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.412843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.412883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.412923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.412963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.412979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004f p:0 m:0 dnr:0
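In the try.txt dump, every pair of lines is one I/O: nvme_qpair.c:243 prints the submitted command (an 8-block WRITE at the given LBA) and nvme_qpair.c:474 its completion. The (03/02) pair is status code type 0x3 (path related) with status code 0x2, asymmetric access inaccessible: these writes were in flight toward the listener whose ANA state had just been flipped to inaccessible, and dnr:0 (do-not-retry clear) marks them as retryable, so the multipath layer can re-drive them on the surviving path instead of failing the 90-second job. A quick tally of how many I/Os took that path status (an illustration, not a step the test performs):
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt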
00:31:18.743 [2024-07-24 09:16:38.413002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.413971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.413988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:31:18.743 [2024-07-24 09:16:38.414814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.743 [2024-07-24 09:16:38.414831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.414868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.414885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.414908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.414940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.414962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.414978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.414999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:18.744 [2024-07-24 09:16:38.415703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.744 [2024-07-24 09:16:38.415719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.415968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.415984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.416422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.416439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.417976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.417992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.418013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.418028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:31:18.745 [2024-07-24 09:16:38.418050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.745 [2024-07-24 09:16:38.418066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.746 [2024-07-24 09:16:38.418798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.746 [2024-07-24 09:16:38.418835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:31:18.746 [2024-07-24 09:16:38.418856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:18.746 [2024-07-24 09:16:38.418873] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.418910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.746 [2024-07-24 09:16:38.418927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.418950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.746 [2024-07-24 09:16:38.418982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.746 [2024-07-24 09:16:38.419023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.746 [2024-07-24 09:16:38.419063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.746 [2024-07-24 09:16:38.419111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:18.746 [2024-07-24 09:16:38.419318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:18.746 [2024-07-24 09:16:38.419380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.746 [2024-07-24 09:16:38.419411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 
lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.419752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.419767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.420980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.420996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:31:18.747 [2024-07-24 09:16:38.421270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.747 [2024-07-24 09:16:38.421545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:18.747 [2024-07-24 09:16:38.421568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.421981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.421998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422078] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 
09:16:38.422498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.748 [2024-07-24 09:16:38.422521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.748 [2024-07-24 09:16:38.422538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59072 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.422979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.422996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.423033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.423070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.423134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.423174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.423969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.423996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:96 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.424314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.424337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.435968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.435984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.436004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.436020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.436041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.436061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:18.749 [2024-07-24 09:16:38.436098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.749 [2024-07-24 09:16:38.436129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:31:18.750 [2024-07-24 09:16:38.436274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.436933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.436969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.436991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.750 [2024-07-24 09:16:38.437232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.437272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.437311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.437351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.750 [2024-07-24 09:16:38.437405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:18.750 [2024-07-24 09:16:38.437426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 
00:31:18.750 [2024-07-24 09:16:38.437 - 09:16:38.447] nvme_qpair.c: repeated NOTICE pairs, condensed (several hundred near-identical command/completion records elided; the span begins and ends mid-record in the raw console output):
00:31:18.750   243:nvme_io_qpair_print_command:   *NOTICE*: WRITE sqid:1 nsid:1 lba:58552-59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 (cid values vary, 0-126)
00:31:18.755   243:nvme_io_qpair_print_command:   *NOTICE*: READ sqid:1 nsid:1 lba:58496-58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (a handful of READs interleaved among the WRITEs)
00:31:18.756   474:spdk_nvme_print_completion:    *NOTICE*: every one of these commands completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd cycling 0x0046-0x007f and wrapping back through 0x0000
OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:67 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.447976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.447992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.448963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.448986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:59160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:59176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:59184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:59200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:59208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:59224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:31:18.757 [2024-07-24 09:16:38.449368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:59240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:59256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:59264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:59272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:59288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:18.757 [2024-07-24 09:16:38.449701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:59296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.757 [2024-07-24 09:16:38.449716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:59304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:59312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:59328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:59336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:59344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.449969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.449990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:59360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:59368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:59376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:59384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:59392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:59424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:59432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:59440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:59448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:18.758 [2024-07-24 09:16:38.450579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:58512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:58528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:58536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:18.758 [2024-07-24 09:16:38.450800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:59456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:59464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:59472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:59480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.450968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.450989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 
nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.451004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.451025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.758 [2024-07-24 09:16:38.451041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:18.758 [2024-07-24 09:16:38.451062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:58552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.451333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.451351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:58616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:58624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:58632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:58648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 
00:31:18.759 [2024-07-24 09:16:38.452578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:28 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.452972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.452988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:18.759 [2024-07-24 09:16:38.453406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.759 [2024-07-24 09:16:38.453422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:18.760 [2024-07-24 09:16:38.453814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.453966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.453988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 
lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.454422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.454443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:59088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.461571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:59096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.461586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:59104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:59112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:59120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:59128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:59136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:18.760 [2024-07-24 09:16:38.462577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:59144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.760 [2024-07-24 09:16:38.462595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 
00:31:18.760 [2024-07-24 09:16:38.462618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:59152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.760 [2024-07-24 09:16:38.462634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
[... hundreds of further near-identical command/completion pairs elided: WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) commands on sqid:1 nsid:1, lba 58496-59512, len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0x0015 through 0x007f and wrapping, p:0 m:0 dnr:0 throughout ...]
00:31:18.766 [2024-07-24 09:16:38.472567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:18.766 [2024-07-24 09:16:38.472583] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:58712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:58728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.472953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.472978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:18.766 [2024-07-24 09:16:38.472994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:58760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:58768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:58792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:58800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:58816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:58824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:58832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:58888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:58904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:58912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.766 [2024-07-24 09:16:38.473902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:18.766 [2024-07-24 09:16:38.473928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:58920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.473944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.473970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:58928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.473986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:58952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:58976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:58984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:18.767 [2024-07-24 09:16:38.474352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:59000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:59024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:59056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:18.767 [2024-07-24 09:16:38.474743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:59064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:18.767 [2024-07-24 09:16:38.474759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
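The roughly fifteen-second gap between this burst and the next is the window in which the test toggles the ANA state of the active path, so every queued I/O above is being retried against a path that reports itself inaccessible. A minimal sketch of how to inspect the per-path ANA state with the same rpc.py used throughout this job is below; the RPC socket path and the controller name Nvme0 (inferred from the job summary further down) are assumptions, not values from this log:

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # List each I/O path of the bdev_nvme controller together with its
    # current ANA state (optimized / non_optimized / inaccessible).
    "$rootdir/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths -n Nvme0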
00:31:18.767 [... a second run of the same NOTICE pairs elided, 2024-07-24 09:16:53.997196 through 09:16:54.002140: WRITE and READ commands on qid:1 (nsid:1, len:8, lba range 34272-35272) again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:31:18.769 Received shutdown signal, test time was about 32.356485 seconds
00:31:18.769
00:31:18.769                                        Latency(us)
00:31:18.769 Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s   Average      min        max
00:31:18.769 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:18.769 	 Verification LBA range: start 0x0 length 0x4000
00:31:18.769 	 Nvme0n1 :      32.36 7903.23   30.87    0.00    0.00  16166.50   430.84 4076242.11
00:31:18.769 ===================================================================================================================
00:31:18.769 Total : 7903.23   30.87    0.00    0.00  16166.50   430.84 4076242.11
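The MiB/s column is consistent with the IOPS column for the fixed 4096-byte verify workload; a quick back-of-the-envelope check (not part of the run):

    # 7903.23 IOPS * 4096 B per IO = ~32.37 MB/s = 30.87 MiB/s, matching the table.
    awk 'BEGIN { printf "%.2f MiB/s\n", 7903.23 * 4096 / (1024 * 1024) }'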
00:31:18.769 09:16:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:19.028 rmmod nvme_tcp
00:31:19.028 rmmod nvme_fabrics
00:31:19.028 rmmod nvme_keyring
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3891792 ']'
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3891792
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3891792 ']'
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3891792
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:19.028 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3891792
00:31:19.286 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:19.286 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:19.286 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3891792'
killing process with pid 3891792
09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3891792
09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3891792
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:19.546 09:16:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:31:21.450
00:31:21.450 real 0m40.917s
00:31:21.450 user 2m3.542s
00:31:21.450 sys 0m10.355s
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:31:21.450 ************************************
00:31:21.450 END TEST nvmf_host_multipath_status
00:31:21.450 ************************************
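Stripped of the xtrace noise, the teardown traced above reduces to the following sequence. This is an illustrative sketch, not the nvmftestfini/killprocess helpers themselves: the PID is the one from this run, and wait only succeeds because the target was started from the same shell.

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem first
    pid=3891792                                    # nvmf target PID from this run
    if [[ "$(ps --no-headers -o comm= "$pid")" == reactor_* ]]; then
        kill "$pid" && wait "$pid"                 # only kill confirmed SPDK reactors, then reap
    fi
    modprobe -v -r nvme-tcp nvme-fabrics           # unload the kernel initiator modules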
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:31:21.450 ************************************
00:31:21.450 START TEST nvmf_discovery_remove_ifc
00:31:21.450 ************************************
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:31:21.450 * Looking for test storage...
00:31:21.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:31:21.450 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
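For orientation, the NVMF_*/NVME_* variables initialized above are exactly the pieces a kernel-initiator connect needs. A hypothetical invocation is sketched below; the target address 192.168.100.8 (NVMF_IP_PREFIX plus NVMF_IP_LEAST_ADDR) and the cnode1 subsystem name are assumptions, since this part of the log has not created a subsystem yet:

    # nvme-cli connect built from the nvmf/common.sh values above
    nvme connect -t tcp -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a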
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:21.709 09:16:59 
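[Editor's note] The PATH values above repeat the same /opt/go, /opt/protoc and /opt/golangci directories many times because paths/export.sh blindly prepends them each time it is sourced. A hypothetical guard (not part of the harness) that would keep the variable idempotent:

  path_prepend() {
      # Prepend $1 to PATH only if it is not already a component.
      case ":$PATH:" in
          *":$1:"*) ;;                          # already present, do nothing
          *) PATH="$1${PATH:+:$PATH}" ;;
      esac
  }
  path_prepend /opt/go/1.21.1/bin
  path_prepend /opt/protoc/21.7/bin
  path_prepend /opt/golangci/1.54.2/bin
  export PATH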
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:21.709 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:21.710 09:16:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:23.614 09:17:01 
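[Editor's note] gather_supported_nvmf_pci_devs, entered here, walks the PCI bus looking for the Intel (0x8086) and Mellanox (0x15b3) NIC IDs collected in the e810/x722/mlx arrays that follow. A simplified sketch of that scan, assuming the standard sysfs layout (the real helper keys off a prebuilt pci_bus_cache):

  intel=0x8086
  pci_devs=()
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
      # 0x159b is the E810 ID that matches on this node (see the "Found" lines below).
      if [[ $vendor == "$intel" && $device == 0x159b ]]; then
          pci_devs+=("${dev##*/}")              # e.g. 0000:09:00.0
      fi
  done
  echo "Found ${#pci_devs[@]} candidate port(s): ${pci_devs[*]}"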
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.614 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:23.615 Found 0000:09:00.0 (0x8086 - 
0x159b) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:23.615 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:23.615 Found net devices under 0000:09:00.0: cvl_0_0 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.615 09:17:01 
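[Editor's note] Each matched PCI function is then resolved to its kernel netdev by globbing the device's net/ directory, which is how 0000:09:00.0 maps to cvl_0_0 above. Roughly (a sketch; the harness also filters on link state, hence the `[[ up == up ]]` checks):

  pci=0000:09:00.0                              # value taken from the trace
  for net in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $net ]] || continue                 # no netdev bound to this function
      echo "Found net devices under $pci: ${net##*/}"
  done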
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:23.615 Found net devices under 0000:09:00.1: cvl_0_1 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:23.615 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:23.874 09:17:01 
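[Editor's note] The namespace plumbing in @248-@261 above, plus the firewall rule and connectivity checks that follow, condense to the commands below (same interface names and addresses as in the trace; requires root on a disposable test node):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1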
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:23.874 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:23.874 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:31:23.874 00:31:23.874 --- 10.0.0.2 ping statistics --- 00:31:23.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.874 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:23.874 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:23.874 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:31:23.874 00:31:23.874 --- 10.0.0.1 ping statistics --- 00:31:23.874 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.874 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3898140 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3898140 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3898140 ']' 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:23.874 09:17:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:23.874 [2024-07-24 09:17:01.842681] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:31:23.874 [2024-07-24 09:17:01.842756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.874 EAL: No free 2048 kB hugepages reported on node 1 00:31:23.874 [2024-07-24 09:17:01.879920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:23.874 [2024-07-24 09:17:01.906969] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.132 [2024-07-24 09:17:01.994505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:24.132 [2024-07-24 09:17:01.994552] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:24.132 [2024-07-24 09:17:01.994566] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:24.132 [2024-07-24 09:17:01.994578] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:24.132 [2024-07-24 09:17:01.994588] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
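[Editor's note] nvmfappstart has just launched the target inside the namespace and waitforlisten is polling its RPC socket. A minimal sketch of that wait, assuming rpc.py and the default socket path shown in the trace (the real helper also bails out after max_retries):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for _ in $(seq 1 100); do                     # give the app ~10 s to come up
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done
  echo "nvmf_tgt listening on /var/tmp/spdk.sock (pid $nvmfpid)"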
00:31:24.132 [2024-07-24 09:17:01.994613] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 [2024-07-24 09:17:02.131950] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:24.132 [2024-07-24 09:17:02.140154] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:24.132 null0 00:31:24.132 [2024-07-24 09:17:02.172087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3898170 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3898170 /tmp/host.sock 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3898170 ']' 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:24.132 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:24.132 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.132 [2024-07-24 09:17:02.235607] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 
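[Editor's note] The bare rpc_cmd at @43 above is a batch whose contents are collapsed in the trace; from its side effects (TCP transport init, the null0 bdev, listeners on 10.0.0.2 ports 8009 and 4420) it plausibly reconstructs to something like the following, with the bdev size here being illustrative:

  rpc() { ./scripts/rpc.py "$@"; }              # target-side RPC socket, default path
  rpc nvmf_create_transport -t tcp -o
  rpc bdev_null_create null0 1000 512           # name, size in MB, block size
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009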
00:31:24.132 [2024-07-24 09:17:02.235672] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3898170 ] 00:31:24.391 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.391 [2024-07-24 09:17:02.267617] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:31:24.391 [2024-07-24 09:17:02.297595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:24.391 [2024-07-24 09:17:02.388579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.391 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:24.650 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.650 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:24.650 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.650 09:17:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.583 [2024-07-24 09:17:03.559615] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:25.583 [2024-07-24 09:17:03.559643] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:25.583 [2024-07-24 09:17:03.559669] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:25.583 [2024-07-24 09:17:03.645932] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:25.840 [2024-07-24 09:17:03.871978] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:25.840 [2024-07-24 09:17:03.872042] 
bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:25.840 [2024-07-24 09:17:03.872096] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:25.840 [2024-07-24 09:17:03.872128] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:25.840 [2024-07-24 09:17:03.872155] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:25.840 [2024-07-24 09:17:03.878185] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1403370 was disconnected and freed. delete nvme_qpair. 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:25.840 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:25.841 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:25.841 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:26.098 09:17:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.098 09:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 
-- # [[ nvme0n1 != '' ]] 00:31:26.098 09:17:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:27.033 09:17:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:27.966 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:28.224 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:28.224 09:17:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:29.157 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:29.158 09:17:07 
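[Editor's note] The get_bdev_list/wait_for_bdev pair driving these once-per-second polls is visible verbatim in the xtrace: query the host app's bdevs over /tmp/host.sock, flatten the names, and sleep until the list matches. Reassembled (rpc_cmd is rpc.py under the hood, and the real helper also enforces a timeout):

  get_bdev_list() {
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      local expected=$1                         # "nvme0n1", "nvme1n1" or ""
      while [[ "$(get_bdev_list)" != "$expected" ]]; do
          sleep 1
      done
  }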
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:29.158 09:17:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:30.091 09:17:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:31.500 09:17:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:31.500 [2024-07-24 09:17:09.313588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:31.500 [2024-07-24 09:17:09.313658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.500 [2024-07-24 09:17:09.313683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.500 [2024-07-24 09:17:09.313711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.500 [2024-07-24 09:17:09.313729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
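[Editor's note] The errno-110 timeout and the aborted admin commands above kick off the reconnect machinery configured when the discovery service attached, back at @69. The flags there (copied from the trace) bound the whole recovery window: retry every second, fail I/O after one second, give up on the controller after two:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
      -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach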
00:31:31.500 [2024-07-24 09:17:09.313745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.500 [2024-07-24 09:17:09.313761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.500 [2024-07-24 09:17:09.313777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.500 [2024-07-24 09:17:09.313793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.500 [2024-07-24 09:17:09.313810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:31.500 [2024-07-24 09:17:09.313826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:31.500 [2024-07-24 09:17:09.313841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9d70 is same with the state(5) to be set 00:31:31.500 [2024-07-24 09:17:09.323608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9d70 (9): Bad file descriptor 00:31:31.500 [2024-07-24 09:17:09.333654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.433 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:32.434 [2024-07-24 09:17:10.354175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:32.434 [2024-07-24 09:17:10.354241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c9d70 with addr=10.0.0.2, port=4420 00:31:32.434 [2024-07-24 09:17:10.354271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c9d70 is same with the state(5) to be set 00:31:32.434 [2024-07-24 09:17:10.354325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9d70 (9): Bad file descriptor 00:31:32.434 [2024-07-24 09:17:10.354810] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:31:32.434 [2024-07-24 09:17:10.354862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:32.434 [2024-07-24 09:17:10.354883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:32.434 [2024-07-24 09:17:10.354903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:32.434 [2024-07-24 09:17:10.354939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:32.434 [2024-07-24 09:17:10.354960] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:32.434 09:17:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:33.368 [2024-07-24 09:17:11.357474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:33.368 [2024-07-24 09:17:11.357529] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:33.368 [2024-07-24 09:17:11.357554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:33.368 [2024-07-24 09:17:11.357571] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:33.368 [2024-07-24 09:17:11.357604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:33.368 [2024-07-24 09:17:11.357648] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:33.368 [2024-07-24 09:17:11.357697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.368 [2024-07-24 09:17:11.357722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.368 [2024-07-24 09:17:11.357745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.368 [2024-07-24 09:17:11.357761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.368 [2024-07-24 09:17:11.357778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.368 [2024-07-24 09:17:11.357794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.368 [2024-07-24 09:17:11.357811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.368 [2024-07-24 09:17:11.357827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.368 [2024-07-24 09:17:11.357844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:33.368 [2024-07-24 09:17:11.357859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:33.368 [2024-07-24 09:17:11.357874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
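[Editor's note] For orientation, the whole fault-injection cycle in this test is driven by four ip commands: @75/@76 earlier tore the target address and link down (producing the failure cascade above), and @82/@83 just below restore them so discovery can re-attach the namespace as nvme1n1:

  # inject the fault (earlier, @75/@76)
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  # recover (below, @82/@83)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up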
00:31:33.368 [2024-07-24 09:17:11.357925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c9210 (9): Bad file descriptor 00:31:33.368 [2024-07-24 09:17:11.358920] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:33.368 [2024-07-24 09:17:11.358946] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:33.368 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:33.627 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:33.627 09:17:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:34.560 09:17:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:34.560 09:17:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.492 [2024-07-24 09:17:13.415916] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:35.492 [2024-07-24 09:17:13.415942] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:35.492 [2024-07-24 09:17:13.415968] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:35.492 [2024-07-24 09:17:13.543383] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:35.492 09:17:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:35.492 [2024-07-24 09:17:13.607082] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:35.492 [2024-07-24 09:17:13.607161] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:35.492 [2024-07-24 09:17:13.607198] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:35.492 [2024-07-24 09:17:13.607221] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:35.492 [2024-07-24 09:17:13.607235] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:35.749 [2024-07-24 09:17:13.614323] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x140c900 was disconnected and freed. 
delete nvme_qpair. 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3898170 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3898170 ']' 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3898170 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3898170 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3898170' 00:31:36.683 killing process with pid 3898170 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3898170 00:31:36.683 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3898170 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:36.942 rmmod nvme_tcp 00:31:36.942 rmmod nvme_fabrics 00:31:36.942 rmmod nvme_keyring 
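[Editor's note] killprocess, exercised twice in this teardown (hostpid 3898170 here, nvmfpid 3898140 next), reduces to the checks visible in the xtrace: confirm the pid is alive, confirm it is an SPDK reactor rather than a sudo wrapper, signal it, then reap it. A condensed sketch:

  killprocess() {
      local pid=$1
      [[ -n $pid ]] || return 1
      kill -0 "$pid" || return 1                # still running?
      [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true           # reap; tolerate non-child pids
  }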
00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3898140 ']' 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3898140 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3898140 ']' 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3898140 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3898140 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3898140' 00:31:36.942 killing process with pid 3898140 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3898140 00:31:36.942 09:17:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3898140 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.200 09:17:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.101 09:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:39.101 00:31:39.101 real 0m17.715s 00:31:39.101 user 0m25.597s 00:31:39.101 sys 0m3.110s 00:31:39.101 09:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:39.101 09:17:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:39.101 ************************************ 00:31:39.101 END TEST nvmf_discovery_remove_ifc 00:31:39.101 ************************************ 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:39.360 ************************************ 00:31:39.360 START TEST nvmf_identify_kernel_target 00:31:39.360 ************************************ 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:31:39.360 * Looking for test storage... 00:31:39.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:39.360 09:17:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.360 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:31:39.361 09:17:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:31:41.261 
09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:41.261 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:41.261 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:41.261 Found net devices under 0000:09:00.0: cvl_0_0 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:41.261 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:41.262 Found net devices under 0000:09:00.1: cvl_0_1 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:41.262 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:41.520 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:41.520 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:41.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:41.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:31:41.520 00:31:41.520 --- 10.0.0.2 ping statistics --- 00:31:41.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.520 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:41.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:41.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:41.521 00:31:41.521 --- 10.0.0.1 ping statistics --- 00:31:41.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:41.521 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:41.521 09:17:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:42.455 Waiting for block devices as requested 00:31:42.455 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:42.712 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:42.712 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:42.712 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:42.712 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:42.969 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:42.969 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:42.969 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:42.969 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:43.228 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.228 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.228 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.486 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.486 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:43.486 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:43.486 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:43.744 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
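The configure_kernel_target trace that continues below amounts to the standard kernel nvmet configfs sequence; condensed into a sketch (the NQN, backing device /dev/nvme0n1, and listen address 10.0.0.1:4420 are the values selected in this run, and the attribute names are the stock nvmet configfs ABI):

# Condensed sketch of configure_kernel_target's configfs operations.
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys"                                      # auto-creates namespaces/
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"         # expose subsystem on the port

Teardown reverses this order, as the clean_kernel_target trace near the end of the test shows: remove the port's subsystem symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.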
00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:43.744 No valid GPT data, bailing 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:43.744 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:44.002 09:17:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:31:44.002 00:31:44.002 Discovery Log Number of Records 2, Generation counter 2 00:31:44.002 =====Discovery Log Entry 0====== 00:31:44.002 trtype: tcp 00:31:44.002 adrfam: ipv4 00:31:44.002 subtype: current discovery subsystem 00:31:44.002 treq: not specified, sq flow control disable supported 00:31:44.002 portid: 1 00:31:44.002 trsvcid: 4420 00:31:44.002 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:44.002 traddr: 10.0.0.1 00:31:44.002 eflags: none 00:31:44.002 sectype: none 00:31:44.002 =====Discovery Log Entry 1====== 00:31:44.002 trtype: tcp 00:31:44.002 adrfam: ipv4 00:31:44.002 subtype: nvme subsystem 00:31:44.002 treq: not specified, sq flow control disable supported 00:31:44.002 portid: 1 00:31:44.002 trsvcid: 4420 00:31:44.002 subnqn: nqn.2016-06.io.spdk:testnqn 00:31:44.002 traddr: 10.0.0.1 00:31:44.002 eflags: none 00:31:44.002 sectype: none 00:31:44.002 09:17:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:31:44.002 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:31:44.002 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.002 ===================================================== 00:31:44.002 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:31:44.002 ===================================================== 00:31:44.002 Controller Capabilities/Features 00:31:44.002 ================================ 00:31:44.002 Vendor ID: 0000 00:31:44.002 Subsystem Vendor ID: 0000 00:31:44.002 Serial Number: 3470935435a7dc8465f0 00:31:44.002 Model Number: Linux 00:31:44.002 Firmware Version: 6.7.0-68 00:31:44.002 Recommended Arb Burst: 0 00:31:44.002 IEEE OUI Identifier: 00 00 00 00:31:44.002 Multi-path I/O 00:31:44.002 May have multiple subsystem ports: No 00:31:44.002 May have multiple controllers: No 00:31:44.002 Associated with SR-IOV VF: No 00:31:44.002 Max Data Transfer Size: Unlimited 00:31:44.002 Max Number of Namespaces: 0 00:31:44.002 Max Number of I/O Queues: 1024 00:31:44.002 NVMe Specification Version (VS): 1.3 00:31:44.002 NVMe Specification Version (Identify): 1.3 00:31:44.002 Maximum Queue Entries: 1024 00:31:44.002 Contiguous Queues Required: No 00:31:44.002 Arbitration Mechanisms Supported 00:31:44.002 Weighted Round Robin: Not Supported 00:31:44.002 Vendor Specific: Not Supported 00:31:44.002 Reset Timeout: 7500 ms 00:31:44.002 Doorbell Stride: 4 bytes 00:31:44.002 NVM Subsystem Reset: Not Supported 00:31:44.002 Command Sets Supported 00:31:44.002 NVM Command Set: Supported 00:31:44.002 Boot Partition: Not Supported 00:31:44.002 Memory Page Size Minimum: 4096 bytes 00:31:44.002 Memory Page Size Maximum: 4096 bytes 00:31:44.002 Persistent Memory Region: Not Supported 00:31:44.002 Optional Asynchronous Events Supported 00:31:44.002 Namespace Attribute Notices: Not Supported 00:31:44.002 Firmware Activation Notices: Not Supported 00:31:44.002 ANA Change Notices: Not Supported 00:31:44.002 PLE Aggregate Log Change Notices: Not Supported 00:31:44.002 LBA Status Info Alert Notices: Not Supported 00:31:44.002 EGE Aggregate Log Change Notices: Not Supported 00:31:44.002 Normal NVM Subsystem Shutdown event: Not Supported 00:31:44.002 Zone Descriptor Change Notices: Not Supported 00:31:44.002 Discovery Log Change Notices: Supported 00:31:44.002 Controller Attributes 00:31:44.002 128-bit Host Identifier: Not Supported 00:31:44.002 Non-Operational Permissive Mode: Not Supported 00:31:44.002 NVM Sets: Not Supported 00:31:44.002 Read Recovery Levels: Not Supported 00:31:44.002 Endurance Groups: Not Supported 00:31:44.002 Predictable Latency Mode: Not Supported 00:31:44.002 Traffic Based Keep ALive: Not Supported 00:31:44.002 Namespace Granularity: Not Supported 00:31:44.002 SQ Associations: Not Supported 00:31:44.002 UUID List: Not Supported 00:31:44.002 Multi-Domain Subsystem: Not Supported 00:31:44.002 Fixed Capacity Management: Not Supported 00:31:44.003 Variable Capacity Management: Not Supported 00:31:44.003 Delete Endurance Group: Not Supported 00:31:44.003 Delete NVM Set: Not Supported 00:31:44.003 Extended LBA Formats Supported: Not Supported 00:31:44.003 Flexible Data Placement Supported: Not Supported 00:31:44.003 00:31:44.003 Controller Memory Buffer Support 00:31:44.003 ================================ 00:31:44.003 Supported: No 
00:31:44.003 00:31:44.003 Persistent Memory Region Support 00:31:44.003 ================================ 00:31:44.003 Supported: No 00:31:44.003 00:31:44.003 Admin Command Set Attributes 00:31:44.003 ============================ 00:31:44.003 Security Send/Receive: Not Supported 00:31:44.003 Format NVM: Not Supported 00:31:44.003 Firmware Activate/Download: Not Supported 00:31:44.003 Namespace Management: Not Supported 00:31:44.003 Device Self-Test: Not Supported 00:31:44.003 Directives: Not Supported 00:31:44.003 NVMe-MI: Not Supported 00:31:44.003 Virtualization Management: Not Supported 00:31:44.003 Doorbell Buffer Config: Not Supported 00:31:44.003 Get LBA Status Capability: Not Supported 00:31:44.003 Command & Feature Lockdown Capability: Not Supported 00:31:44.003 Abort Command Limit: 1 00:31:44.003 Async Event Request Limit: 1 00:31:44.003 Number of Firmware Slots: N/A 00:31:44.003 Firmware Slot 1 Read-Only: N/A 00:31:44.003 Firmware Activation Without Reset: N/A 00:31:44.003 Multiple Update Detection Support: N/A 00:31:44.003 Firmware Update Granularity: No Information Provided 00:31:44.003 Per-Namespace SMART Log: No 00:31:44.003 Asymmetric Namespace Access Log Page: Not Supported 00:31:44.003 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:31:44.003 Command Effects Log Page: Not Supported 00:31:44.003 Get Log Page Extended Data: Supported 00:31:44.003 Telemetry Log Pages: Not Supported 00:31:44.003 Persistent Event Log Pages: Not Supported 00:31:44.003 Supported Log Pages Log Page: May Support 00:31:44.003 Commands Supported & Effects Log Page: Not Supported 00:31:44.003 Feature Identifiers & Effects Log Page:May Support 00:31:44.003 NVMe-MI Commands & Effects Log Page: May Support 00:31:44.003 Data Area 4 for Telemetry Log: Not Supported 00:31:44.003 Error Log Page Entries Supported: 1 00:31:44.003 Keep Alive: Not Supported 00:31:44.003 00:31:44.003 NVM Command Set Attributes 00:31:44.003 ========================== 00:31:44.003 Submission Queue Entry Size 00:31:44.003 Max: 1 00:31:44.003 Min: 1 00:31:44.003 Completion Queue Entry Size 00:31:44.003 Max: 1 00:31:44.003 Min: 1 00:31:44.003 Number of Namespaces: 0 00:31:44.003 Compare Command: Not Supported 00:31:44.003 Write Uncorrectable Command: Not Supported 00:31:44.003 Dataset Management Command: Not Supported 00:31:44.003 Write Zeroes Command: Not Supported 00:31:44.003 Set Features Save Field: Not Supported 00:31:44.003 Reservations: Not Supported 00:31:44.003 Timestamp: Not Supported 00:31:44.003 Copy: Not Supported 00:31:44.003 Volatile Write Cache: Not Present 00:31:44.003 Atomic Write Unit (Normal): 1 00:31:44.003 Atomic Write Unit (PFail): 1 00:31:44.003 Atomic Compare & Write Unit: 1 00:31:44.003 Fused Compare & Write: Not Supported 00:31:44.003 Scatter-Gather List 00:31:44.003 SGL Command Set: Supported 00:31:44.003 SGL Keyed: Not Supported 00:31:44.003 SGL Bit Bucket Descriptor: Not Supported 00:31:44.003 SGL Metadata Pointer: Not Supported 00:31:44.003 Oversized SGL: Not Supported 00:31:44.003 SGL Metadata Address: Not Supported 00:31:44.003 SGL Offset: Supported 00:31:44.003 Transport SGL Data Block: Not Supported 00:31:44.003 Replay Protected Memory Block: Not Supported 00:31:44.003 00:31:44.003 Firmware Slot Information 00:31:44.003 ========================= 00:31:44.003 Active slot: 0 00:31:44.003 00:31:44.003 00:31:44.003 Error Log 00:31:44.003 ========= 00:31:44.003 00:31:44.003 Active Namespaces 00:31:44.003 ================= 00:31:44.003 Discovery Log Page 00:31:44.003 ================== 00:31:44.003 
Generation Counter: 2 00:31:44.003 Number of Records: 2 00:31:44.003 Record Format: 0 00:31:44.003 00:31:44.003 Discovery Log Entry 0 00:31:44.003 ---------------------- 00:31:44.003 Transport Type: 3 (TCP) 00:31:44.003 Address Family: 1 (IPv4) 00:31:44.003 Subsystem Type: 3 (Current Discovery Subsystem) 00:31:44.003 Entry Flags: 00:31:44.003 Duplicate Returned Information: 0 00:31:44.003 Explicit Persistent Connection Support for Discovery: 0 00:31:44.003 Transport Requirements: 00:31:44.003 Secure Channel: Not Specified 00:31:44.003 Port ID: 1 (0x0001) 00:31:44.003 Controller ID: 65535 (0xffff) 00:31:44.003 Admin Max SQ Size: 32 00:31:44.003 Transport Service Identifier: 4420 00:31:44.003 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:31:44.003 Transport Address: 10.0.0.1 00:31:44.003 Discovery Log Entry 1 00:31:44.003 ---------------------- 00:31:44.003 Transport Type: 3 (TCP) 00:31:44.003 Address Family: 1 (IPv4) 00:31:44.003 Subsystem Type: 2 (NVM Subsystem) 00:31:44.003 Entry Flags: 00:31:44.003 Duplicate Returned Information: 0 00:31:44.003 Explicit Persistent Connection Support for Discovery: 0 00:31:44.003 Transport Requirements: 00:31:44.003 Secure Channel: Not Specified 00:31:44.003 Port ID: 1 (0x0001) 00:31:44.003 Controller ID: 65535 (0xffff) 00:31:44.003 Admin Max SQ Size: 32 00:31:44.003 Transport Service Identifier: 4420 00:31:44.003 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:31:44.003 Transport Address: 10.0.0.1 00:31:44.003 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:44.003 EAL: No free 2048 kB hugepages reported on node 1 00:31:44.262 get_feature(0x01) failed 00:31:44.262 get_feature(0x02) failed 00:31:44.262 get_feature(0x04) failed 00:31:44.262 ===================================================== 00:31:44.262 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:44.262 ===================================================== 00:31:44.262 Controller Capabilities/Features 00:31:44.262 ================================ 00:31:44.263 Vendor ID: 0000 00:31:44.263 Subsystem Vendor ID: 0000 00:31:44.263 Serial Number: 7ebd6b9a185ad7608958 00:31:44.263 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:31:44.263 Firmware Version: 6.7.0-68 00:31:44.263 Recommended Arb Burst: 6 00:31:44.263 IEEE OUI Identifier: 00 00 00 00:31:44.263 Multi-path I/O 00:31:44.263 May have multiple subsystem ports: Yes 00:31:44.263 May have multiple controllers: Yes 00:31:44.263 Associated with SR-IOV VF: No 00:31:44.263 Max Data Transfer Size: Unlimited 00:31:44.263 Max Number of Namespaces: 1024 00:31:44.263 Max Number of I/O Queues: 128 00:31:44.263 NVMe Specification Version (VS): 1.3 00:31:44.263 NVMe Specification Version (Identify): 1.3 00:31:44.263 Maximum Queue Entries: 1024 00:31:44.263 Contiguous Queues Required: No 00:31:44.263 Arbitration Mechanisms Supported 00:31:44.263 Weighted Round Robin: Not Supported 00:31:44.263 Vendor Specific: Not Supported 00:31:44.263 Reset Timeout: 7500 ms 00:31:44.263 Doorbell Stride: 4 bytes 00:31:44.263 NVM Subsystem Reset: Not Supported 00:31:44.263 Command Sets Supported 00:31:44.263 NVM Command Set: Supported 00:31:44.263 Boot Partition: Not Supported 00:31:44.263 Memory Page Size Minimum: 4096 bytes 00:31:44.263 Memory Page Size Maximum: 4096 bytes 00:31:44.263 
Persistent Memory Region: Not Supported 00:31:44.263 Optional Asynchronous Events Supported 00:31:44.263 Namespace Attribute Notices: Supported 00:31:44.263 Firmware Activation Notices: Not Supported 00:31:44.263 ANA Change Notices: Supported 00:31:44.263 PLE Aggregate Log Change Notices: Not Supported 00:31:44.263 LBA Status Info Alert Notices: Not Supported 00:31:44.263 EGE Aggregate Log Change Notices: Not Supported 00:31:44.263 Normal NVM Subsystem Shutdown event: Not Supported 00:31:44.263 Zone Descriptor Change Notices: Not Supported 00:31:44.263 Discovery Log Change Notices: Not Supported 00:31:44.263 Controller Attributes 00:31:44.263 128-bit Host Identifier: Supported 00:31:44.263 Non-Operational Permissive Mode: Not Supported 00:31:44.263 NVM Sets: Not Supported 00:31:44.263 Read Recovery Levels: Not Supported 00:31:44.263 Endurance Groups: Not Supported 00:31:44.263 Predictable Latency Mode: Not Supported 00:31:44.263 Traffic Based Keep ALive: Supported 00:31:44.263 Namespace Granularity: Not Supported 00:31:44.263 SQ Associations: Not Supported 00:31:44.263 UUID List: Not Supported 00:31:44.263 Multi-Domain Subsystem: Not Supported 00:31:44.263 Fixed Capacity Management: Not Supported 00:31:44.263 Variable Capacity Management: Not Supported 00:31:44.263 Delete Endurance Group: Not Supported 00:31:44.263 Delete NVM Set: Not Supported 00:31:44.263 Extended LBA Formats Supported: Not Supported 00:31:44.263 Flexible Data Placement Supported: Not Supported 00:31:44.263 00:31:44.263 Controller Memory Buffer Support 00:31:44.263 ================================ 00:31:44.263 Supported: No 00:31:44.263 00:31:44.263 Persistent Memory Region Support 00:31:44.263 ================================ 00:31:44.263 Supported: No 00:31:44.263 00:31:44.263 Admin Command Set Attributes 00:31:44.263 ============================ 00:31:44.263 Security Send/Receive: Not Supported 00:31:44.263 Format NVM: Not Supported 00:31:44.263 Firmware Activate/Download: Not Supported 00:31:44.263 Namespace Management: Not Supported 00:31:44.263 Device Self-Test: Not Supported 00:31:44.263 Directives: Not Supported 00:31:44.263 NVMe-MI: Not Supported 00:31:44.263 Virtualization Management: Not Supported 00:31:44.263 Doorbell Buffer Config: Not Supported 00:31:44.263 Get LBA Status Capability: Not Supported 00:31:44.263 Command & Feature Lockdown Capability: Not Supported 00:31:44.263 Abort Command Limit: 4 00:31:44.263 Async Event Request Limit: 4 00:31:44.263 Number of Firmware Slots: N/A 00:31:44.263 Firmware Slot 1 Read-Only: N/A 00:31:44.263 Firmware Activation Without Reset: N/A 00:31:44.263 Multiple Update Detection Support: N/A 00:31:44.263 Firmware Update Granularity: No Information Provided 00:31:44.263 Per-Namespace SMART Log: Yes 00:31:44.263 Asymmetric Namespace Access Log Page: Supported 00:31:44.263 ANA Transition Time : 10 sec 00:31:44.263 00:31:44.263 Asymmetric Namespace Access Capabilities 00:31:44.263 ANA Optimized State : Supported 00:31:44.263 ANA Non-Optimized State : Supported 00:31:44.263 ANA Inaccessible State : Supported 00:31:44.263 ANA Persistent Loss State : Supported 00:31:44.263 ANA Change State : Supported 00:31:44.263 ANAGRPID is not changed : No 00:31:44.263 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:31:44.263 00:31:44.263 ANA Group Identifier Maximum : 128 00:31:44.263 Number of ANA Group Identifiers : 128 00:31:44.263 Max Number of Allowed Namespaces : 1024 00:31:44.263 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:31:44.263 Command Effects Log Page: Supported 
00:31:44.263 Get Log Page Extended Data: Supported 00:31:44.263 Telemetry Log Pages: Not Supported 00:31:44.263 Persistent Event Log Pages: Not Supported 00:31:44.263 Supported Log Pages Log Page: May Support 00:31:44.263 Commands Supported & Effects Log Page: Not Supported 00:31:44.263 Feature Identifiers & Effects Log Page:May Support 00:31:44.263 NVMe-MI Commands & Effects Log Page: May Support 00:31:44.263 Data Area 4 for Telemetry Log: Not Supported 00:31:44.263 Error Log Page Entries Supported: 128 00:31:44.263 Keep Alive: Supported 00:31:44.263 Keep Alive Granularity: 1000 ms 00:31:44.263 00:31:44.263 NVM Command Set Attributes 00:31:44.263 ========================== 00:31:44.263 Submission Queue Entry Size 00:31:44.263 Max: 64 00:31:44.263 Min: 64 00:31:44.263 Completion Queue Entry Size 00:31:44.263 Max: 16 00:31:44.263 Min: 16 00:31:44.263 Number of Namespaces: 1024 00:31:44.263 Compare Command: Not Supported 00:31:44.263 Write Uncorrectable Command: Not Supported 00:31:44.263 Dataset Management Command: Supported 00:31:44.263 Write Zeroes Command: Supported 00:31:44.263 Set Features Save Field: Not Supported 00:31:44.263 Reservations: Not Supported 00:31:44.263 Timestamp: Not Supported 00:31:44.263 Copy: Not Supported 00:31:44.263 Volatile Write Cache: Present 00:31:44.263 Atomic Write Unit (Normal): 1 00:31:44.263 Atomic Write Unit (PFail): 1 00:31:44.263 Atomic Compare & Write Unit: 1 00:31:44.263 Fused Compare & Write: Not Supported 00:31:44.263 Scatter-Gather List 00:31:44.263 SGL Command Set: Supported 00:31:44.263 SGL Keyed: Not Supported 00:31:44.263 SGL Bit Bucket Descriptor: Not Supported 00:31:44.263 SGL Metadata Pointer: Not Supported 00:31:44.263 Oversized SGL: Not Supported 00:31:44.263 SGL Metadata Address: Not Supported 00:31:44.263 SGL Offset: Supported 00:31:44.263 Transport SGL Data Block: Not Supported 00:31:44.263 Replay Protected Memory Block: Not Supported 00:31:44.263 00:31:44.263 Firmware Slot Information 00:31:44.263 ========================= 00:31:44.263 Active slot: 0 00:31:44.263 00:31:44.263 Asymmetric Namespace Access 00:31:44.263 =========================== 00:31:44.263 Change Count : 0 00:31:44.263 Number of ANA Group Descriptors : 1 00:31:44.263 ANA Group Descriptor : 0 00:31:44.263 ANA Group ID : 1 00:31:44.263 Number of NSID Values : 1 00:31:44.263 Change Count : 0 00:31:44.263 ANA State : 1 00:31:44.263 Namespace Identifier : 1 00:31:44.263 00:31:44.263 Commands Supported and Effects 00:31:44.263 ============================== 00:31:44.263 Admin Commands 00:31:44.263 -------------- 00:31:44.263 Get Log Page (02h): Supported 00:31:44.263 Identify (06h): Supported 00:31:44.263 Abort (08h): Supported 00:31:44.263 Set Features (09h): Supported 00:31:44.263 Get Features (0Ah): Supported 00:31:44.263 Asynchronous Event Request (0Ch): Supported 00:31:44.263 Keep Alive (18h): Supported 00:31:44.263 I/O Commands 00:31:44.263 ------------ 00:31:44.263 Flush (00h): Supported 00:31:44.263 Write (01h): Supported LBA-Change 00:31:44.263 Read (02h): Supported 00:31:44.263 Write Zeroes (08h): Supported LBA-Change 00:31:44.263 Dataset Management (09h): Supported 00:31:44.263 00:31:44.263 Error Log 00:31:44.263 ========= 00:31:44.263 Entry: 0 00:31:44.263 Error Count: 0x3 00:31:44.263 Submission Queue Id: 0x0 00:31:44.263 Command Id: 0x5 00:31:44.263 Phase Bit: 0 00:31:44.263 Status Code: 0x2 00:31:44.263 Status Code Type: 0x0 00:31:44.263 Do Not Retry: 1 00:31:44.263 Error Location: 0x28 00:31:44.264 LBA: 0x0 00:31:44.264 Namespace: 0x0 00:31:44.264 Vendor Log 
Page: 0x0 00:31:44.264 ----------- 00:31:44.264 Entry: 1 00:31:44.264 Error Count: 0x2 00:31:44.264 Submission Queue Id: 0x0 00:31:44.264 Command Id: 0x5 00:31:44.264 Phase Bit: 0 00:31:44.264 Status Code: 0x2 00:31:44.264 Status Code Type: 0x0 00:31:44.264 Do Not Retry: 1 00:31:44.264 Error Location: 0x28 00:31:44.264 LBA: 0x0 00:31:44.264 Namespace: 0x0 00:31:44.264 Vendor Log Page: 0x0 00:31:44.264 ----------- 00:31:44.264 Entry: 2 00:31:44.264 Error Count: 0x1 00:31:44.264 Submission Queue Id: 0x0 00:31:44.264 Command Id: 0x4 00:31:44.264 Phase Bit: 0 00:31:44.264 Status Code: 0x2 00:31:44.264 Status Code Type: 0x0 00:31:44.264 Do Not Retry: 1 00:31:44.264 Error Location: 0x28 00:31:44.264 LBA: 0x0 00:31:44.264 Namespace: 0x0 00:31:44.264 Vendor Log Page: 0x0 00:31:44.264 00:31:44.264 Number of Queues 00:31:44.264 ================ 00:31:44.264 Number of I/O Submission Queues: 128 00:31:44.264 Number of I/O Completion Queues: 128 00:31:44.264 00:31:44.264 ZNS Specific Controller Data 00:31:44.264 ============================ 00:31:44.264 Zone Append Size Limit: 0 00:31:44.264 00:31:44.264 00:31:44.264 Active Namespaces 00:31:44.264 ================= 00:31:44.264 get_feature(0x05) failed 00:31:44.264 Namespace ID:1 00:31:44.264 Command Set Identifier: NVM (00h) 00:31:44.264 Deallocate: Supported 00:31:44.264 Deallocated/Unwritten Error: Not Supported 00:31:44.264 Deallocated Read Value: Unknown 00:31:44.264 Deallocate in Write Zeroes: Not Supported 00:31:44.264 Deallocated Guard Field: 0xFFFF 00:31:44.264 Flush: Supported 00:31:44.264 Reservation: Not Supported 00:31:44.264 Namespace Sharing Capabilities: Multiple Controllers 00:31:44.264 Size (in LBAs): 1953525168 (931GiB) 00:31:44.264 Capacity (in LBAs): 1953525168 (931GiB) 00:31:44.264 Utilization (in LBAs): 1953525168 (931GiB) 00:31:44.264 UUID: e5cadfea-0083-44a7-8b49-d3eaf261e9bf 00:31:44.264 Thin Provisioning: Not Supported 00:31:44.264 Per-NS Atomic Units: Yes 00:31:44.264 Atomic Boundary Size (Normal): 0 00:31:44.264 Atomic Boundary Size (PFail): 0 00:31:44.264 Atomic Boundary Offset: 0 00:31:44.264 NGUID/EUI64 Never Reused: No 00:31:44.264 ANA group ID: 1 00:31:44.264 Namespace Write Protected: No 00:31:44.264 Number of LBA Formats: 1 00:31:44.264 Current LBA Format: LBA Format #00 00:31:44.264 LBA Format #00: Data Size: 512 Metadata Size: 0 00:31:44.264 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:44.264 rmmod nvme_tcp 00:31:44.264 rmmod nvme_fabrics 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:31:44.264 09:17:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:44.264 09:17:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:46.164 09:17:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:47.538 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:47.538 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:47.538 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:31:47.538 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:48.473 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:31:48.758 00:31:48.758 real 0m9.360s 00:31:48.758 user 0m1.971s 00:31:48.758 sys 0m3.340s 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:31:48.758 ************************************ 00:31:48.758 END TEST nvmf_identify_kernel_target 00:31:48.758 ************************************ 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.758 ************************************ 00:31:48.758 START TEST nvmf_auth_host 00:31:48.758 ************************************ 00:31:48.758 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:31:48.758 * Looking for test storage... 00:31:48.758 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
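
[Note] The trace above shows test/nvmf/common.sh deriving the host identity once per run: `nvme gen-hostnqn` produces a UUID-based NQN, and the UUID suffix is reused as the host ID passed to every `nvme` invocation. A minimal sketch of that pattern; the parameter expansion is an assumption, since the trace does not show how common.sh extracts the UUID:

NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # keep only the UUID suffix
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# reused verbatim later in this run, e.g.:
#   nvme discover "${NVME_HOST[@]}" -a 10.0.0.1 -t tcp -s 4420
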
00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:31:48.759 09:17:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:50.661 09:17:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:50.661 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
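
[Note] The tables above whitelist NVMf-capable NICs by PCI vendor:device ID (the e810/x722/mlx arrays); the loop that follows resolves each matching PCI function to its kernel netdev through sysfs. A rough standalone equivalent of that loop, assuming the two E810 functions this run found (the operstate "up" filtering seen at common.sh@390 is elided):

for pci in 0000:09:00.0 0000:09:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # sysfs lists the netdevs bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
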
00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:50.661 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:50.661 Found net devices under 0000:09:00.0: cvl_0_0 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:50.661 Found net devices under 0000:09:00.1: cvl_0_1 00:31:50.661 09:17:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:50.661 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:50.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:50.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:31:50.920 00:31:50.920 --- 10.0.0.2 ping statistics --- 00:31:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.920 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:50.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:50.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:31:50.920 00:31:50.920 --- 10.0.0.1 ping statistics --- 00:31:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:50.920 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3905267 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3905267 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3905267 ']' 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
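
[Note] Collapsed from the nvmf_tcp_init trace above: the target-side interface is moved into a private network namespace so the SPDK target and the kernel initiator can talk over real NICs on one host. Same commands, annotated; names and addresses are the ones from this run, and the nvmf_tgt path is shortened:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target NIC moves into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # sanity-check both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmfappstart then launches the target inside the namespace:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
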
00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:50.920 09:17:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=de83647c272ec37fdf0740ffe17750f6 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Kxb 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key de83647c272ec37fdf0740ffe17750f6 0 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 de83647c272ec37fdf0740ffe17750f6 0 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=de83647c272ec37fdf0740ffe17750f6 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Kxb 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Kxb 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Kxb 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.179 09:17:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b6955d39f3662f772c8fc848aa4bf76261771746e61289b6471c69092379e3df 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.cdi 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b6955d39f3662f772c8fc848aa4bf76261771746e61289b6471c69092379e3df 3 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b6955d39f3662f772c8fc848aa4bf76261771746e61289b6471c69092379e3df 3 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b6955d39f3662f772c8fc848aa4bf76261771746e61289b6471c69092379e3df 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:51.179 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.cdi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.cdi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.cdi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dcaf5fd48ae4d9e6abfab24e75430ffa843a02bd53a862ae 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.RHi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dcaf5fd48ae4d9e6abfab24e75430ffa843a02bd53a862ae 0 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dcaf5fd48ae4d9e6abfab24e75430ffa843a02bd53a862ae 0 
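
[Note] gen_dhchap_key, reconstructed: the secret is N random bytes rendered as a hex string by xxd, and format_dhchap_key (the `python -` steps in this trace, whose body xtrace does not show) wraps that string as DHHC-1:<hash-id>:<base64(key || crc32)>: with hash id 0 for null and 1/2/3 for sha256/384/512, matching the digests table above. A sketch under those assumptions; the little-endian CRC-32 suffix is inferred from the kernel's DH-HMAC-CHAP secret format, not from this log:

key=$(xxd -p -c0 -l 24 /dev/urandom)     # 48 hex chars, as for keys[1] above
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")   # 4-byte integrity suffix (assumed byte order)
print("DHHC-1:00:%s:" % base64.b64encode(key + crc).decode())
PY
chmod 0600 "$file"                       # these files are secrets; mode matters
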
00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dcaf5fd48ae4d9e6abfab24e75430ffa843a02bd53a862ae 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.RHi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.RHi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.RHi 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=272ed14f42a41f3f317b5951fa0d5ae956cbf99c9d404ed2 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.r5W 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 272ed14f42a41f3f317b5951fa0d5ae956cbf99c9d404ed2 2 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 272ed14f42a41f3f317b5951fa0d5ae956cbf99c9d404ed2 2 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=272ed14f42a41f3f317b5951fa0d5ae956cbf99c9d404ed2 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.r5W 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.r5W 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.r5W 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.438 09:17:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1a39de2cb262097fd0fcc154a17bc998 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:51.438 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.B2p 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1a39de2cb262097fd0fcc154a17bc998 1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1a39de2cb262097fd0fcc154a17bc998 1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1a39de2cb262097fd0fcc154a17bc998 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.B2p 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.B2p 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.B2p 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=57e97eb5120d1d006e325b3093251572 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.blK 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 57e97eb5120d1d006e325b3093251572 1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 57e97eb5120d1d006e325b3093251572 1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=57e97eb5120d1d006e325b3093251572 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.blK 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.blK 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.blK 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4821e99cb85b040b52436df305e2a1f1f28babac376b7514 00:31:51.439 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ZGr 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4821e99cb85b040b52436df305e2a1f1f28babac376b7514 2 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4821e99cb85b040b52436df305e2a1f1f28babac376b7514 2 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4821e99cb85b040b52436df305e2a1f1f28babac376b7514 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ZGr 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ZGr 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ZGr 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:31:51.697 09:17:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9812b702c03f1d409eb5ce358ca6120d 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.aJJ 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9812b702c03f1d409eb5ce358ca6120d 0 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9812b702c03f1d409eb5ce358ca6120d 0 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9812b702c03f1d409eb5ce358ca6120d 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.aJJ 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.aJJ 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.aJJ 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=14f62b14384abcb7f1bb2cc119f19138279b1bb0d5aa2ff7e686033145578c3c 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.0QU 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 14f62b14384abcb7f1bb2cc119f19138279b1bb0d5aa2ff7e686033145578c3c 3 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 14f62b14384abcb7f1bb2cc119f19138279b1bb0d5aa2ff7e686033145578c3c 3 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=14f62b14384abcb7f1bb2cc119f19138279b1bb0d5aa2ff7e686033145578c3c 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.0QU 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.0QU 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.0QU 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3905267 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3905267 ']' 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:51.697 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Kxb 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.cdi ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.cdi 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.RHi 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.r5W ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.r5W 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.B2p 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.blK ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.blK 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ZGr 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.aJJ ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.aJJ 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.0QU 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.957 09:17:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:51.957 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.957 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:31:51.957 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:31:51.957 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:31:51.957 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:51.958 09:17:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:31:51.958 09:17:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:52.930 Waiting for block devices as requested 00:31:52.930 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:53.188 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:53.188 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:53.188 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:53.446 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:53.446 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:53.446 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:53.446 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:53.704 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:31:53.704 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:53.962 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:53.962 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:53.962 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:53.962 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:54.221 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:54.221 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:54.221 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:54.788 No valid GPT data, bailing 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:31:54.788 09:17:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:31:54.788 00:31:54.788 Discovery Log Number of Records 2, Generation counter 2 00:31:54.788 =====Discovery Log Entry 0====== 00:31:54.788 trtype: tcp 00:31:54.788 adrfam: ipv4 00:31:54.788 subtype: current discovery subsystem 00:31:54.788 treq: not specified, sq flow control disable supported 00:31:54.788 portid: 1 00:31:54.788 trsvcid: 4420 00:31:54.788 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:31:54.788 traddr: 10.0.0.1 00:31:54.788 eflags: none 00:31:54.788 sectype: none 00:31:54.788 =====Discovery Log Entry 1====== 00:31:54.788 trtype: tcp 00:31:54.788 adrfam: ipv4 00:31:54.788 subtype: nvme subsystem 00:31:54.788 treq: not specified, sq flow control disable supported 00:31:54.788 portid: 1 00:31:54.788 trsvcid: 4420 00:31:54.788 subnqn: nqn.2024-02.io.spdk:cnode0 00:31:54.788 traddr: 10.0.0.1 00:31:54.788 eflags: none 00:31:54.788 sectype: none 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host 
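[editor's note] Everything configure_kernel_target and auth.sh@36-38 just did reduces to a short configfs session. The destination files are not visible in the xtrace output (only the echoed values are), so the standard kernel nvmet attribute names are assumed in this sketch:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

    modprobe nvmet
    mkdir -p "$subsys/namespaces/1" "$port" "$host"
    echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed attr name
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"       # target now answers discovery
    echo 0 > "$subsys/attr_allow_any_host"    # auth tests pin a single host NQN
    ln -s "$host" "$subsys/allowed_hosts/"

Once the subsystem is linked into ports/1/subsystems, the `nvme discover` above returns the two records shown: the discovery subsystem itself and nqn.2024-02.io.spdk:cnode0.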
-- host/auth.sh@49 -- # echo ffdhe2048 00:31:54.788 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:54.789 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:31:54.789 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:54.789 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:54.789 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
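[editor's note] nvmet_auth_set_key's four echoes land in per-host DHCHAP attributes under the hosts/ entry; again only the values appear in the trace, so the standard nvmet file names are assumed, and the secrets are placeholders for the DHHC-1 strings logged above:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest negotiated for this host
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # FFDHE group for augmented CHAP
    echo 'DHHC-1:00:<host-secret>'  > "$host/dhchap_key"        # placeholder
    echo 'DHHC-1:02:<ctrlr-secret>' > "$host/dhchap_ctrl_key"   # only when a ckey exists

The `[[ -z $ckey ]]` guard at auth.sh@51 skips the last write when no controller key is configured, which leaves authentication unidirectional for that key id.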
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 nvme0n1 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
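[editor's note] On the initiator side, each connect_authenticate pass is four RPCs. rpc_cmd in this harness wraps SPDK's scripts/rpc.py, and key0/ckey0 name keys the test registered earlier (not shown in this excerpt); a condensed replay:

    # Constrain negotiation to one digest/dhgroup pair, then connect with DH-HMAC-CHAP.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Pass criterion: the controller materializes, then is torn down for the next combo.
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0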
00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.047 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.305 nvme0n1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.305 09:17:33 
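[editor's note] The get_main_ns_ip boilerplate repeated before every attach is plain variable indirection; reconstructed from the nvmf/common.sh@741-755 markers (a sketch, not the verbatim function):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
        ip=${ip_candidates[$TEST_TRANSPORT]}   # here TEST_TRANSPORT=tcp
        [[ -n ${!ip} ]] && echo "${!ip}"       # indirect expansion -> 10.0.0.1
    }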
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.305 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.562 nvme0n1 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:31:55.562 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.563 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.821 nvme0n1 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.821 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.080 nvme0n1 00:31:56.080 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.080 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.080 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.080 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.080 09:17:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.080 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.338 nvme0n1 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.338 09:17:34 
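[editor's note] Key id 4 is the asymmetric case: its ckey is empty, so the `${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}` expansion at auth.sh@58 contributes nothing and the attach goes out host-authenticated only. Sketched with the same assumed rpc.py wrapper as above:

    # No --dhchap-ctrlr-key => unidirectional DH-HMAC-CHAP: the host proves its
    # identity, but the controller is not challenged back.
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key4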
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.338 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.339 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.339 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:56.339 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.339 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.597 nvme0n1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.597 
09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.597 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:56.598 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.598 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.856 nvme0n1 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:56.856 09:17:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:56.856 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:56.857 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:56.857 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:56.857 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.857 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.115 nvme0n1 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.115 09:17:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.115 09:17:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.115 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.374 nvme0n1 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:57.374 09:17:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.374 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.632 nvme0n1 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.632 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.633 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.890 nvme0n1 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.890 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:57.891 09:17:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.891 09:17:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.148 nvme0n1 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.148 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
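[Editor's note] Each iteration of the loop traced above exercises one (digest, dhgroup, keyid) combination: nvmet_auth_set_key provisions the DH-HMAC-CHAP secret on the kernel nvmet target, bdev_nvme_set_options restricts the SPDK host to the digest and DH group under test, and bdev_nvme_attach_controller performs the authenticated connect. A minimal sketch of one iteration follows; the two rpc calls appear verbatim in the trace (rpc.py standing in for the rpc_cmd wrapper), while the configfs attribute names are assumptions inferred from the echoes inside nvmet_auth_set_key, and key2/ckey2 are names of keys registered with the keyring earlier in the run, not shown in this excerpt.

    # Target side: provision the DH-HMAC-CHAP secret for this iteration.
    # NOTE: configfs paths/attributes below are assumed, not shown in the trace.
    hostnqn=nqn.2024-02.io.spdk:host0
    host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo 'hmac(sha256)' > "$host_dir/dhchap_hash"      # digest under test
    echo 'ffdhe4096'    > "$host_dir/dhchap_dhgroup"   # DH group under test
    echo "$key"  > "$host_dir/dhchap_key"              # DHHC-1:... host secret
    echo "$ckey" > "$host_dir/dhchap_ctrl_key"         # ctrlr secret, if non-empty

    # Host side: limit SPDK to the digest/dhgroup under test, then connect.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2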
00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.406 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.407 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:31:58.407 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.407 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.665 nvme0n1 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.665 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.923 nvme0n1 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:58.923 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:58.924 09:17:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.924 09:17:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.182 nvme0n1 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:31:59.182 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.440 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.006 nvme0n1 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 
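[Editor's note] After each attach, the trace shows the same success check: a completed DH-HMAC-CHAP handshake leaves controller nvme0 behind (its namespace surfaces as nvme0n1 in the log), which the script confirms before detaching and moving to the next keyid. A condensed sketch of that verify-and-teardown step, again assuming rpc.py as the transport behind rpc_cmd:

    # Verify the authenticated controller actually came up, then tear it down.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1            # no controller => handshake failed
    rpc.py bdev_nvme_detach_controller nvme0  # clean slate for the next keyid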
00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.006 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.007 09:17:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.573 nvme0n1 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:00.573 09:17:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:00.573 09:17:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.139 nvme0n1 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.139 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.140 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.706 nvme0n1 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:01.706 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:01.707 09:17:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.273 nvme0n1 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:02.273 09:17:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:03.206 nvme0n1 00:32:03.206 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.206 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:03.206 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:03.206 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.206 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:03.464 09:17:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.397 nvme0n1 00:32:04.397 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.397 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:04.397 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:04.397 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.397 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:04.398 
09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
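
# A detail worth calling out in the host/auth.sh@58 line above:
# ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) builds an
# *optional* argument pair. When ckeys[keyid] is set and non-empty the array
# receives the two words --dhchap-ctrlr-key ckeyN; when it is empty (keyid 4)
# the expansion vanishes and the flag is omitted from the RPC entirely.
# Self-contained demonstration of the ${var:+...} idiom (values illustrative):
declare -a ckeys=([0]="secret0" [4]="")
keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # 2 -> flag and value are passed through
keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
echo "${#ckey[@]}"    # 0 -> empty ckey, no controller-key flag at all
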
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:04.398 09:17:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.329 nvme0n1 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:05.329 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:05.330 
09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.330 09:17:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.701 nvme0n1 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:06.701 09:17:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.634 nvme0n1 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.634 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
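
# The trace has just moved from sha256 to sha384 and restarted the dhgroup
# sweep, which makes the overall shape of the test visible: three nested
# loops (host/auth.sh@100-103) over digests, DH groups, and key ids, each
# iteration doing set-key / set_options / attach / verify / detach. A
# condensed sketch, calling SPDK's scripts/rpc.py directly where the test
# uses its rpc_cmd wrapper (the rpc.py invocation style and the prefilled
# digests/dhgroups/keys arrays are assumptions; the RPC names and flags are
# exactly those visible in the trace):
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
            scripts/rpc.py bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key${keyid}" "${ckey[@]}"
            # Authentication succeeded iff the controller shows up as nvme0.
            [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
            scripts/rpc.py bdev_nvme_detach_controller nvme0
        done
    done
done
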
DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.635 nvme0n1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.635 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.893 nvme0n1 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:07.893 09:17:45 
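
# The comparison [[ nvme0 == \n\v\m\e\0 ]] at host/auth.sh@64 above looks odd
# but is just xtrace's rendering of a quoted right-hand side: inside [[ ]] an
# unquoted RHS of == is treated as a glob pattern, so the script quotes it and
# xtrace prints that quoting as backslash escapes. A two-line illustration:
name=nvme0
[[ $name == "nvme0" ]] && echo literal-match    # what the check above does
[[ $name == nvme* ]] && echo glob-match         # what an unquoted pattern would allow
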
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:07.893 09:17:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.151 nvme0n1 00:32:08.151 09:17:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:08.151 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.152 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.409 nvme0n1 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:08.409 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
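
# The DHHC-1:NN:...: strings cycled through above are the textual secret
# format for DH-HMAC-CHAP: a version tag, a two-digit transformation
# indicator (00 = unhashed; 01/02/03 = SHA-256/384/512), and a base64 blob
# carrying the secret (per the NVMe TP8006 key format the blob also embeds a
# CRC32; that detail comes from the spec, not from this log). Keys of the
# same shape can be generated with nvme-cli, assuming a version recent enough
# to ship gen-dhchap-key:
nvme gen-dhchap-key -m 3 -n nqn.2024-02.io.spdk:host0    # SHA-512 -> "DHHC-1:03:..."
nvme gen-dhchap-key -m 0 -n nqn.2024-02.io.spdk:host0    # unhashed -> "DHHC-1:00:..."
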
common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.410 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 nvme0n1 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.667 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.668 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.925 nvme0n1 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.925 
09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:08.925 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:08.926 09:17:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:08.926 09:17:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.183 nvme0n1 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.183 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.184 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.442 nvme0n1 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.442 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.700 nvme0n1 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:09.700 
09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.700 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 nvme0n1 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 
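Slot 4 above is the unidirectional case: its controller key is empty, so the [[ -z '' ]] guard skips the ctrl-key setup and the attach omits --dhchap-ctrlr-key. On the target side, the nvmet_auth_set_key calls that open every iteration amount to programming the kernel nvmet host entry through configfs; a rough sketch, assuming the host NQN directory already exists and using the attribute names from the Linux nvmet ABI:

  HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$HOST/dhchap_hash"      # digest under test
  echo 'ffdhe3072'    > "$HOST/dhchap_dhgroup"   # FFDHE group (RFC 7919)
  echo "$key"         > "$HOST/dhchap_key"       # host secret in DHHC-1:... form
  [[ -n $ckey ]] && echo "$ckey" > "$HOST/dhchap_ctrl_key"   # only for bidirectional slots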
09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:09.958 09:17:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.264 nvme0n1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:10.264 09:17:48 
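The secrets themselves use the DHHC-1 representation from the NVMe in-band authentication TP: DHHC-1:<t>:<base64>:, where <t> names an optional secret transformation hash (00 none, 01/02/03 for SHA-256/384/512; note the five slots in this run deliberately cover all four variants) and, as I understand the format, the base64 payload is the raw secret followed by a 4-byte CRC-32. A quick way to sanity-check a key's length, using the slot-1 key from this log:

  key='DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==:'
  b64=${key#DHHC-1:*:}              # strip the 'DHHC-1:00:' prefix
  b64=${b64%:}                      # and the trailing colon
  echo "$b64" | base64 -d | wc -c   # secret length + 4 checksum bytes (52 here, i.e. a 48-byte secret)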
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.264 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.522 nvme0n1 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.522 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:10.779 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:10.780 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.037 nvme0n1 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.037 09:17:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.037 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.038 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.295 nvme0n1 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.295 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.296 09:17:49 
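get_main_ns_ip, re-run before every attach, only decides which harness variable supplies the initiator address for the active transport; a condensed sketch (the logged version additionally guards against empty values before echoing):

  get_main_ns_ip() {
      local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
      local var=${ip_candidates[$TEST_TRANSPORT]}   # tcp in this run
      echo "${!var}"                                # resolves to 10.0.0.1 here
  }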
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.296 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.862 nvme0n1 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.862 09:17:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.428 nvme0n1 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.428 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.429 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.429 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.429 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:12.429 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.429 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.995 nvme0n1 00:32:12.995 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.995 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:12.995 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:12.995 09:17:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.995 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:12.996 09:17:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:12.996 09:17:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.562 nvme0n1 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:13.562 09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.562 
09:17:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.127 nvme0n1 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.128 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.694 nvme0n1 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:14.694 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:14.695 09:17:52 
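Each key slot above exercises the same four-step host-side sequence: pin the digest and DH group with bdev_nvme_set_options, attach with the key pair, confirm the controller came up by name, then detach. Stripped of xtrace noise, one iteration reduces to the standalone RPCs below (a sketch: the scripts/rpc.py entry point and the earlier keyring registration of key0/ckey0 are assumed, while every flag is taken verbatim from the trace):

  # Restrict the host to a single digest/DH-group pair, then authenticate.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Success criterion used by the test: the controller shows up under its bdev name.
  scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  scripts/rpc.py bdev_nvme_detach_controller nvme0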
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.695 09:17:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.630 nvme0n1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.630 09:17:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.562 nvme0n1 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:16.562 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:16.820 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:16.821 
09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.821 09:17:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.753 nvme0n1 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.753 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.754 09:17:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.709 nvme0n1 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.709 09:17:56 
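On the target side, each echo 'hmac(...)' / echo ffdheNNNN / echo DHHC-1:... triple is nvmet_auth_set_key pushing the expected digest, DH group, and key pair into the kernel target's per-host configfs entry. A minimal sketch of that helper, assuming the standard nvmet configfs layout (run as root) and the host NQN used throughout this run:

  # Tell the kernel nvmet target what to expect from nqn.2024-02.io.spdk:host0.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha384)' > "$host/dhchap_hash"
  echo 'ffdhe8192' > "$host/dhchap_dhgroup"
  echo 'DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==:' > "$host/dhchap_key"
  # Setting a controller key as well makes the handshake bidirectional.
  echo 'DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem:' > "$host/dhchap_ctrl_key"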
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.709 09:17:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.709 09:17:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.644 nvme0n1 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.644 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.645 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:19.904 nvme0n1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.904 09:17:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.162 nvme0n1 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.162 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:20.163 
09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.163 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 nvme0n1 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.421 
09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.421 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.680 nvme0n1 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.680 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.939 nvme0n1 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.939 09:17:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.198 nvme0n1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.198 
09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.198 09:17:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.198 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.457 nvme0n1 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:21.457 09:17:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.457 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.715 nvme0n1 00:32:21.715 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.716 09:17:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.716 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 nvme0n1 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:21.975 
09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.975 09:17:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
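For reference, each passing iteration traced above reduces to four host-side RPCs (auth.sh@60-65). A minimal standalone sketch follows, assuming an SPDK target listening on 10.0.0.1:4420 and that the DH-HMAC-CHAP secrets were already registered in SPDK's keyring under the names key0..key4 and ckey0..ckey4 earlier in auth.sh (outside this excerpt); the rpc.py path is illustrative.

#!/usr/bin/env bash
rpc=scripts/rpc.py   # rpc_cmd in the trace is a thin wrapper around this

# Restrict the initiator to the digest/dhgroup pair under test.
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072

# Connect with bidirectional authentication (host secret + controller secret).
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key2 --dhchap-ctrlr-key ckey2

# The attach only counts as a pass if a controller named nvme0 now exists.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# Tear down before the next digest/dhgroup/keyid combination.
$rpc bdev_nvme_detach_controller nvme0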
00:32:22.233 nvme0n1 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:22.233 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.234 09:18:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.234 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.492 nvme0n1 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.492 09:18:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.492 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.493 09:18:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.493 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.751 nvme0n1 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.751 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.009 09:18:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.267 nvme0n1 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.267 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 nvme0n1 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.525 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.526 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.526 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:23.526 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.526 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.091 nvme0n1 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.091 09:18:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.091 09:18:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.658 nvme0n1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:24.658 09:18:02 
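The trace above repeats one connect_authenticate cycle per key index. Distilled from the commands visible in the log, a minimal sketch of a single cycle follows; the address 10.0.0.1, port 4420, the NQNs and the key names are taken verbatim from the trace, and rpc_cmd is assumed to be the suite's usual RPC wrapper:

  # One traced cycle: pin the host to a single digest/dhgroup pair, attach
  # with the keypair under test, check the controller appeared, then detach.
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0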
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.658 09:18:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.223 nvme0n1 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.223 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.224 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.788 nvme0n1 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.788 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.789 09:18:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.354 nvme0n1 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:26.354 09:18:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:26.354 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.355 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.920 nvme0n1 00:32:26.920 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.920 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.920 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.920 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.921 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.921 09:18:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
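From here the trace enters the ffdhe8192 pass of the same matrix. The host/auth.sh@101 and @102 markers show the structure generating all of this output: a loop over DH groups with an inner loop over key indices, each iteration first programming the target and then authenticating from the host. Reconstructed from those markers (an outer digest loop presumably fixes sha512 at this point, and the keys/ckeys arrays hold the DHHC-1 strings echoed in the trace):

  for dhgroup in "${dhgroups[@]}"; do      # ffdhe4096, ffdhe6144, ffdhe8192, ...
    for keyid in "${!keys[@]}"; do         # key indices 0..4
      nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # target side (auth.sh@103)
      connect_authenticate sha512 "$dhgroup" "$keyid"  # host side   (auth.sh@104)
    done
  done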
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:26.921 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGU4MzY0N2MyNzJlYzM3ZmRmMDc0MGZmZTE3NzUwZjZE9AGh: 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: ]] 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjY5NTVkMzlmMzY2MmY3NzJjOGZjODQ4YWE0YmY3NjI2MTc3MTc0NmU2MTI4OWI2NDcxYzY5MDkyMzc5ZTNkZpACP38=: 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.179 09:18:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.115 nvme0n1 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.115 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.116 09:18:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.091 nvme0n1 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.091 09:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MWEzOWRlMmNiMjYyMDk3ZmQwZmNjMTU0YTE3YmM5OTgD09Bx: 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTdlOTdlYjUxMjBkMWQwMDZlMzI1YjMwOTMyNTE1NzIp+q60: 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.091 09:18:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.091 09:18:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.025 nvme0n1 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NDgyMWU5OWNiODViMDQwYjUyNDM2ZGYzMDVlMmExZjFmMjhiYWJhYzM3NmI3NTE0smAJQA==: 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: ]] 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTgxMmI3MDJjMDNmMWQ0MDllYjVjZTM1OGNhNjEyMGQ1Wmem: 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.025 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:30.026 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.026 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:30.284 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.284 
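get_main_ns_ip, whose body is traced over and over above (nvmf/common.sh@741-755), only maps the transport to the environment variable holding the initiator address and prints the resolved value. A sketch of that logic as it reads in the trace, assuming the transport name arrives in TEST_TRANSPORT as elsewhere in the suite:

  get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    # the [[ -z tcp ]] / [[ -z NVMF_INITIATOR_IP ]] guards seen in the trace:
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # indirect expansion; resolves to 10.0.0.1 here
    echo "${!ip}"
  }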
09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 nvme0n1 00:32:31.220 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.220 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.220 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.220 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 09:18:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MTRmNjJiMTQzODRhYmNiN2YxYmIyY2MxMTlmMTkxMzgyNzliMWJiMGQ1YWEyZmY3ZTY4NjAzMzE0NTU3OGMzY4NtKig=: 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.220 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.785 nvme0n1 00:32:31.785 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.785 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.785 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.785 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.785 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
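
Two details from the passes above are easy to miss in the trace: for keyid=4 the controller-key slot is empty, so the ${ckeys[keyid]:+...} expansion drops --dhchap-ctrlr-key entirely and only the host authenticates; and every successful attach is verified and torn down the same way before the next keyid. That check, condensed:

    # the attach counts as a pass iff a controller named nvme0 now exists
    [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    scripts/rpc.py bdev_nvme_detach_controller nvme0   # clean slate for the next keyid
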
-- # keyid=1 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGNhZjVmZDQ4YWU0ZDllNmFiZmFiMjRlNzU0MzBmZmE4NDNhMDJiZDUzYTg2MmFlGJ88HA==: 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MjcyZWQxNGY0MmE0MWYzZjMxN2I1OTUxZmEwZDVhZTk1NmNiZjk5YzlkNDA0ZWQytsCe+Q==: 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.043 09:18:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.043 request: 00:32:32.043 { 00:32:32.043 "name": "nvme0", 00:32:32.043 "trtype": "tcp", 00:32:32.043 "traddr": "10.0.0.1", 00:32:32.043 "adrfam": "ipv4", 00:32:32.043 "trsvcid": "4420", 00:32:32.043 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.044 "prchk_reftag": false, 00:32:32.044 "prchk_guard": false, 00:32:32.044 "hdgst": false, 00:32:32.044 "ddgst": false, 00:32:32.044 "method": "bdev_nvme_attach_controller", 00:32:32.044 "req_id": 1 00:32:32.044 } 00:32:32.044 Got JSON-RPC error response 00:32:32.044 response: 00:32:32.044 { 00:32:32.044 "code": -5, 00:32:32.044 "message": "Input/output error" 00:32:32.044 } 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.044 09:18:10 
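
The -5 (Input/output error) response above is the expected result: the kernel target still requires DH-HMAC-CHAP for host0, so an attach carrying no --dhchap-key must fail, and the NOT wrapper turns that failure into a pass. A simplified sketch of the idiom (the real helper in autotest_common.sh also validates its argument and normalizes signal exit statuses, which is what the (( es > 128 )) check in the trace is doing):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # succeed only when the command fails
    NOT scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
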
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.044 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.044 request: 00:32:32.044 { 00:32:32.044 "name": "nvme0", 00:32:32.044 "trtype": "tcp", 00:32:32.044 "traddr": "10.0.0.1", 00:32:32.044 "adrfam": "ipv4", 00:32:32.044 "trsvcid": "4420", 00:32:32.044 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.044 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.044 "prchk_reftag": false, 00:32:32.044 "prchk_guard": false, 00:32:32.044 "hdgst": false, 00:32:32.044 "ddgst": false, 00:32:32.044 "dhchap_key": "key2", 00:32:32.044 "method": "bdev_nvme_attach_controller", 00:32:32.044 "req_id": 1 00:32:32.044 } 00:32:32.044 Got JSON-RPC error response 00:32:32.044 response: 00:32:32.044 { 00:32:32.044 "code": -5, 00:32:32.044 "message": "Input/output error" 00:32:32.044 } 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.302 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.302 request: 00:32:32.302 { 00:32:32.302 "name": "nvme0", 00:32:32.302 "trtype": "tcp", 00:32:32.302 "traddr": "10.0.0.1", 00:32:32.302 "adrfam": "ipv4", 00:32:32.302 "trsvcid": "4420", 00:32:32.302 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:32.302 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:32.302 "prchk_reftag": false, 00:32:32.302 "prchk_guard": false, 00:32:32.302 "hdgst": false, 00:32:32.302 "ddgst": false, 00:32:32.303 "dhchap_key": "key1", 00:32:32.303 "dhchap_ctrlr_key": "ckey2", 00:32:32.303 "method": "bdev_nvme_attach_controller", 00:32:32.303 "req_id": 1 00:32:32.303 } 00:32:32.303 Got JSON-RPC error response 00:32:32.303 response: 00:32:32.303 { 00:32:32.303 "code": -5, 00:32:32.303 "message": "Input/output error" 00:32:32.303 } 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
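
All secrets in this run use the in-band authentication key representation from the NVMe spec: DHHC-1:<tt>:<base64 payload>:, where the two-digit tag records how the secret was transformed (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the payload is the secret with a CRC-32 appended, base64-encoded. Keys in this format can be produced with nvme-cli; a sketch, with flag spellings as in recent nvme-cli (worth confirming against nvme gen-dhchap-key --help locally):

    # 32-byte random secret, presented with the SHA-256 transform tag (DHHC-1:01:...)
    nvme gen-dhchap-key --key-length=32 --hmac=1
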
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:32.303 rmmod nvme_tcp 00:32:32.303 rmmod nvme_fabrics 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3905267 ']' 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3905267 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3905267 ']' 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3905267 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3905267 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3905267' 00:32:32.303 killing process with pid 3905267 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3905267 00:32:32.303 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3905267 00:32:32.561 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:32.562 09:18:10 
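
cleanup/nvmftestfini then unwinds the host side. The module unload is retried because nvme-tcp can keep references while connections drain; once it goes, nvme-fabrics follows and the SPDK app (pid 3905267 in this run) is killed. Roughly:

    sync
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # may fail until in-flight connections close
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess checks the process name first
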
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.562 09:18:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:35.095 09:18:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:35.673 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:35.932 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:35.932 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:36.869 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:32:36.869 09:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Kxb /tmp/spdk.key-null.RHi /tmp/spdk.key-sha256.B2p /tmp/spdk.key-sha384.ZGr /tmp/spdk.key-sha512.0QU /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:32:36.869 09:18:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
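
clean_kernel_target dismantles the kernel target's configfs tree strictly child-before-parent, since rmdir refuses non-empty nvmet nodes: disable the namespace, unlink the subsystem from the port, then remove namespace, port and subsystem before the modules can unload. Regrouped for readability (the bare echo 0 in the trace is a redirection into the namespace enable attribute; xtrace does not show redirect targets):

    echo 0 > /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
    rm -f  /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
    rmdir  /sys/kernel/config/nvmet/ports/1
    rmdir  /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    modprobe -r nvmet_tcp nvmet   # only possible once configfs is empty
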
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:38.245 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:38.245 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:38.245 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:38.245 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:38.245 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:38.245 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:38.245 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:38.245 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:38.245 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:32:38.245 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:32:38.245 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:32:38.245 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:32:38.245 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:32:38.245 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:32:38.245 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:32:38.245 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:32:38.245 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:32:38.245 00:32:38.245 real 0m49.640s 00:32:38.245 user 0m46.882s 00:32:38.245 sys 0m5.810s 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.245 ************************************ 00:32:38.245 END TEST nvmf_auth_host 00:32:38.245 ************************************ 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.245 ************************************ 00:32:38.245 START TEST nvmf_digest 00:32:38.245 ************************************ 00:32:38.245 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:32:38.503 * Looking for test storage... 
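
The START TEST / END TEST banners and the 49.6 s wall-clock figure come from the run_test harness, which brackets and times each sub-suite; nvmf_digest below is launched through the same wrapper. Condensed from autotest_common.sh, the shape of it is:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"    # the real helper also tracks suite nesting and xtrace state
        echo "END TEST $name"
    }
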
00:32:38.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:38.503 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:38.504 
09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:32:38.504 09:18:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:40.405 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:40.406 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:40.406 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.406 
09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:40.406 Found net devices under 0000:09:00.0: cvl_0_0 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:40.406 Found net devices under 0000:09:00.1: cvl_0_1 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:40.406 09:18:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:40.406 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:40.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:40.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:32:40.665 00:32:40.665 --- 10.0.0.2 ping statistics --- 00:32:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.665 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:40.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:40.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:32:40.665 00:32:40.665 --- 10.0.0.1 ping statistics --- 00:32:40.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:40.665 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:40.665 ************************************ 00:32:40.665 START TEST nvmf_digest_clean 00:32:40.665 ************************************ 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
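
nvmftestinit has now built the test fabric from the two E810 ports discovered above: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings prove both directions before any NVMe/TCP traffic flows. The commands, collected from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target NIC leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
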
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3915409 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3915409 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3915409 ']' 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:40.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:40.665 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:40.665 [2024-07-24 09:18:18.644448] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:40.665 [2024-07-24 09:18:18.644521] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:40.665 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.665 [2024-07-24 09:18:18.682098] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:40.665 [2024-07-24 09:18:18.708261] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:40.923 [2024-07-24 09:18:18.790004] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:40.923 [2024-07-24 09:18:18.790050] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:40.923 [2024-07-24 09:18:18.790063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:40.923 [2024-07-24 09:18:18.790074] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:40.923 [2024-07-24 09:18:18.790083] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:40.924 [2024-07-24 09:18:18.790137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.924 09:18:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:40.924 null0 00:32:40.924 [2024-07-24 09:18:18.981874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:40.924 [2024-07-24 09:18:19.006108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3915434 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3915434 /var/tmp/bperf.sock 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3915434 ']' 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:40.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:40.924 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:41.182 [2024-07-24 09:18:19.055830] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:41.182 [2024-07-24 09:18:19.055898] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915434 ] 00:32:41.182 EAL: No free 2048 kB hugepages reported on node 1 00:32:41.182 [2024-07-24 09:18:19.089804] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:41.182 [2024-07-24 09:18:19.120280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:41.182 [2024-07-24 09:18:19.209643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:41.182 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:41.182 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:41.182 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:41.182 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:41.182 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:41.748 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:41.749 09:18:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:42.007 nvme0n1 00:32:42.007 09:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:42.007 09:18:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:42.265 Running I/O for 2 seconds... 
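
The measurement path is: start bdevperf suspended (-z plus --wait-for-rpc) on its own socket, initialize it, attach a controller with data digest enabled (--ddgst) so every payload is CRC-32C-checked, then drive the run over the same socket. In outline, with paths relative to the SPDK tree:

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # (the test waits for /var/tmp/bperf.sock to accept connections before continuing)
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
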
00:32:44.164 00:32:44.164 Latency(us) 00:32:44.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.164 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:32:44.164 nvme0n1 : 2.05 17964.33 70.17 0.00 0.00 6974.51 3094.76 44661.57 00:32:44.164 =================================================================================================================== 00:32:44.164 Total : 17964.33 70.17 0.00 0.00 6974.51 3094.76 44661.57 00:32:44.164 0 00:32:44.164 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:44.164 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:44.164 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:44.164 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:44.164 | select(.opcode=="crc32c") 00:32:44.164 | "\(.module_name) \(.executed)"' 00:32:44.164 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3915434 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3915434 ']' 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3915434 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3915434 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3915434' 00:32:44.422 killing process with pid 3915434 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3915434 00:32:44.422 Received shutdown signal, test time was about 2.000000 seconds 00:32:44.422 00:32:44.422 Latency(us) 00:32:44.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.422 =================================================================================================================== 00:32:44.422 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:44.422 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3915434 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3915839 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3915839 /var/tmp/bperf.sock 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3915839 ']' 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:44.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:44.681 09:18:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:44.940 [2024-07-24 09:18:22.808249] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:44.940 [2024-07-24 09:18:22.808346] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3915839 ] 00:32:44.940 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:44.940 Zero copy mechanism will not be used. 00:32:44.940 EAL: No free 2048 kB hugepages reported on node 1 00:32:44.940 [2024-07-24 09:18:22.841540] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:44.940 [2024-07-24 09:18:22.873822] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.940 [2024-07-24 09:18:22.970558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:44.940 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:44.940 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:44.940 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:44.940 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:44.940 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:45.506 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.506 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:45.764 nvme0n1 00:32:45.764 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:45.764 09:18:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:45.764 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:45.764 Zero copy mechanism will not be used. 00:32:45.764 Running I/O for 2 seconds... 
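
Two things worth noting before the result table below. First, at this 128 KiB IO size bdevperf reports "I/O size of 131072 is greater than zero copy threshold (65536)", so the socket zero-copy send path is skipped for this run. Second, the IOPS and MiB/s columns in the table are redundant and can be cross-checked as MiB/s = IOPS × IO size / 2^20; a quick check against the numbers reported below (IO size 131072 as set with -o):

# 4054.36 IOPS at 128 KiB -> expect the MiB/s column to read ~506.80
awk 'BEGIN { printf "%.2f MiB/s\n", 4054.36 * 131072 / 1048576 }'
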
00:32:48.293 00:32:48.293 Latency(us) 00:32:48.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.293 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:32:48.293 nvme0n1 : 2.00 4054.36 506.80 0.00 0.00 3941.95 3495.25 11602.30 00:32:48.293 =================================================================================================================== 00:32:48.293 Total : 4054.36 506.80 0.00 0.00 3941.95 3495.25 11602.30 00:32:48.293 0 00:32:48.293 09:18:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:48.293 09:18:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:48.293 09:18:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:48.293 09:18:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:48.293 | select(.opcode=="crc32c") 00:32:48.293 | "\(.module_name) \(.executed)"' 00:32:48.293 09:18:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3915839 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3915839 ']' 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3915839 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:48.293 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3915839 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3915839' 00:32:48.294 killing process with pid 3915839 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3915839 00:32:48.294 Received shutdown signal, test time was about 2.000000 seconds 00:32:48.294 00:32:48.294 Latency(us) 00:32:48.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:48.294 =================================================================================================================== 00:32:48.294 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3915839 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3916251 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3916251 /var/tmp/bperf.sock 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3916251 ']' 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:48.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:48.294 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:48.294 [2024-07-24 09:18:26.365110] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:48.294 [2024-07-24 09:18:26.365207] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916251 ] 00:32:48.294 EAL: No free 2048 kB hugepages reported on node 1 00:32:48.294 [2024-07-24 09:18:26.398075] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:48.552 [2024-07-24 09:18:26.431487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.552 [2024-07-24 09:18:26.520221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:48.552 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:48.552 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:48.552 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:48.552 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:48.552 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:48.811 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:48.811 09:18:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:49.377 nvme0n1 00:32:49.377 09:18:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:49.377 09:18:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:49.635 Running I/O for 2 seconds... 
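
After each run, the harness decides whether crc32c was computed by the expected accel module by filtering accel_get_stats through the jq expression visible in the traces. A sketch of that check, with an illustrative (not authoritative) output shape inferred from the jq filter itself:

# accel_get_stats is assumed here to return roughly:
#   { "operations": [ { "opcode": "crc32c",
#                       "module_name": "software",
#                       "executed": 12345 } ] }
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
  | jq -rc '.operations[] | select(.opcode=="crc32c")
            | "\(.module_name) \(.executed)"'
# -> e.g. "software 12345"; digest.sh then asserts executed > 0 and that the
#    module matches the expected one (software here, since scan_dsa=false).
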
00:32:51.619 00:32:51.619 Latency(us) 00:32:51.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.619 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.619 nvme0n1 : 2.00 20848.54 81.44 0.00 0.00 6130.18 2536.49 15437.37 00:32:51.619 =================================================================================================================== 00:32:51.619 Total : 20848.54 81.44 0.00 0.00 6130.18 2536.49 15437.37 00:32:51.619 0 00:32:51.619 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:51.619 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:51.619 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:51.619 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:51.619 | select(.opcode=="crc32c") 00:32:51.619 | "\(.module_name) \(.executed)"' 00:32:51.619 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3916251 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3916251 ']' 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3916251 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3916251 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3916251' 00:32:51.878 killing process with pid 3916251 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3916251 00:32:51.878 Received shutdown signal, test time was about 2.000000 seconds 00:32:51.878 00:32:51.878 Latency(us) 00:32:51.878 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.878 =================================================================================================================== 00:32:51.878 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:51.878 09:18:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3916251 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3916776 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3916776 /var/tmp/bperf.sock 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3916776 ']' 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:52.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:52.137 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:52.137 [2024-07-24 09:18:30.078638] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:52.137 [2024-07-24 09:18:30.078731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3916776 ] 00:32:52.137 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:52.137 Zero copy mechanism will not be used. 00:32:52.137 EAL: No free 2048 kB hugepages reported on node 1 00:32:52.137 [2024-07-24 09:18:30.110621] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
00:32:52.137 [2024-07-24 09:18:30.138013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.137 [2024-07-24 09:18:30.220456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.395 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:52.395 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:32:52.395 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:32:52.395 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:32:52.395 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:32:52.653 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:52.653 09:18:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:53.219 nvme0n1 00:32:53.219 09:18:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:32:53.219 09:18:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.219 I/O size of 131072 is greater than zero copy threshold (65536). 00:32:53.219 Zero copy mechanism will not be used. 00:32:53.219 Running I/O for 2 seconds... 
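
The teardown after each run goes through the killprocess helper whose xtrace is repeated throughout this section. A sketch reconstructed from those trace lines, not the verbatim autotest source:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                  # fails if the pid is gone
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1          # never signal a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap and propagate exit code
}

The "Received shutdown signal, test time was about 2.000000 seconds" block and the all-zero latency table that follow each kill are bdevperf's normal shutdown summary, not a failure.
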
00:32:55.746 00:32:55.746 Latency(us) 00:32:55.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.746 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:32:55.746 nvme0n1 : 2.00 4087.83 510.98 0.00 0.00 3904.75 3082.62 8980.86 00:32:55.746 =================================================================================================================== 00:32:55.746 Total : 4087.83 510.98 0.00 0.00 3904.75 3082.62 8980.86 00:32:55.746 0 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:32:55.746 | select(.opcode=="crc32c") 00:32:55.746 | "\(.module_name) \(.executed)"' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3916776 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3916776 ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3916776 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3916776 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3916776' 00:32:55.746 killing process with pid 3916776 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3916776 00:32:55.746 Received shutdown signal, test time was about 2.000000 seconds 00:32:55.746 00:32:55.746 Latency(us) 00:32:55.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:55.746 =================================================================================================================== 00:32:55.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3916776 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3915409 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3915409 ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3915409 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3915409 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3915409' 00:32:55.746 killing process with pid 3915409 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3915409 00:32:55.746 09:18:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3915409 00:32:56.005 00:32:56.005 real 0m15.457s 00:32:56.005 user 0m30.857s 00:32:56.005 sys 0m4.092s 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:32:56.005 ************************************ 00:32:56.005 END TEST nvmf_digest_clean 00:32:56.005 ************************************ 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:32:56.005 ************************************ 00:32:56.005 START TEST nvmf_digest_error 00:32:56.005 ************************************ 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3917216 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:32:56.005 09:18:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3917216 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3917216 ']' 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.005 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.264 [2024-07-24 09:18:34.149045] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:56.264 [2024-07-24 09:18:34.149164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.264 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.264 [2024-07-24 09:18:34.185258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:56.264 [2024-07-24 09:18:34.212031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.264 [2024-07-24 09:18:34.291765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.264 [2024-07-24 09:18:34.291815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.264 [2024-07-24 09:18:34.291837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.264 [2024-07-24 09:18:34.291849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.264 [2024-07-24 09:18:34.291859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:56.264 [2024-07-24 09:18:34.291884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.264 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.523 [2024-07-24 09:18:34.380460] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.523 null0 00:32:56.523 [2024-07-24 09:18:34.486409] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.523 [2024-07-24 09:18:34.510639] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3917351 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3917351 /var/tmp/bperf.sock 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3917351 ']' 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:56.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:56.523 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:56.523 [2024-07-24 09:18:34.559597] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:32:56.523 [2024-07-24 09:18:34.559685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917351 ] 00:32:56.523 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.523 [2024-07-24 09:18:34.591733] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:32:56.523 [2024-07-24 09:18:34.619412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.782 [2024-07-24 09:18:34.703131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.782 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:56.782 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:32:56.782 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:56.782 09:18:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.040 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:32:57.607 nvme0n1 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o 
crc32c -t corrupt -i 256 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:32:57.607 09:18:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:57.607 Running I/O for 2 seconds... 00:32:57.607 [2024-07-24 09:18:35.678471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.607 [2024-07-24 09:18:35.678527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.607 [2024-07-24 09:18:35.678550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.607 [2024-07-24 09:18:35.692950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.607 [2024-07-24 09:18:35.692988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.607 [2024-07-24 09:18:35.693008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.607 [2024-07-24 09:18:35.708952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.607 [2024-07-24 09:18:35.708987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.607 [2024-07-24 09:18:35.709007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.607 [2024-07-24 09:18:35.721569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.607 [2024-07-24 09:18:35.721607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.607 [2024-07-24 09:18:35.721628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.738372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.738424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.738445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.752907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.752943] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.752962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.764404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.764440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.764459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.779211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.779242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.779259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.793432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.793468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.793487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.807323] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.807353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.807369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.819597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.819632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.819651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.834486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.834522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:5170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.834547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.850147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 
00:32:57.866 [2024-07-24 09:18:35.850188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.850204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.866200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.866240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.866256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.878238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.878269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.878286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.890724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.890753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.890769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.866 [2024-07-24 09:18:35.905119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.866 [2024-07-24 09:18:35.905151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.866 [2024-07-24 09:18:35.905168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.867 [2024-07-24 09:18:35.916310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.867 [2024-07-24 09:18:35.916341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.867 [2024-07-24 09:18:35.916358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.867 [2024-07-24 09:18:35.930690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280) 00:32:57.867 [2024-07-24 09:18:35.930722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:57.867 [2024-07-24 09:18:35.930740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:32:57.867 [2024-07-24 09:18:35.945350] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fee280)
00:32:57.867 [2024-07-24 09:18:35.945382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:19668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:57.867 [2024-07-24 09:18:35.945415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x1fee280), the affected READ, a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for every injected failure from 09:18:35.958 through 09:18:37.662; the iostat counter read below puts the run's total at 143 ...]
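Each injected failure above is logged as a three-line group: the TCP transport's digest-mismatch error on the qpair, the READ command it hit, and that command's completion with status (00/22), i.e. status code type 0x0 (generic) and status code 0x22 (Command Transient Transport Error), which is retryable because only the wire transfer, not the stored data, is suspect. As a rough cross-check against the counter queried below, the groups can be tallied from a saved copy of this console output; the log path here is only an assumption:

  # tally the transient-transport-error completions in a captured run log
  # (the path is hypothetical; point it at wherever this console output was saved)
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' /tmp/nvmf_digest_error.log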
00:32:59.683
00:32:59.683 Latency(us)
00:32:59.683 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:32:59.683 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:32:59.683 nvme0n1                     :       2.00   18223.63      71.19      0.00     0.00    7014.86    3592.34   24758.04
00:32:59.683 ===================================================================================================================
00:32:59.683 Total                       :              18223.63      71.19      0.00     0.00    7014.86    3592.34   24758.04
00:32:59.683 0
00:32:59.683 09:18:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:32:59.683 09:18:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:32:59.684 09:18:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:32:59.684 09:18:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:32:59.942 09:18:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
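The (( 143 > 0 )) check above is the actual assertion: get_transient_errcount reads the controller's NVMe error statistics over bdevperf's RPC socket and requires at least one transient transport error. A standalone sketch of the same query, using the socket and bdev name from this run (the counter is only maintained because the controller is set up with --nvme-error-stat, as the traces further below show):

  # read nvme0n1's transient-transport-error counter over the bperf RPC socket
  # (same rpc.py call and jq filter as the traced helper above)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'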
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3917763
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3917763 /var/tmp/bperf.sock
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3917763 ']'
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:00.201 [2024-07-24 09:18:38.254329] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:33:00.201 [2024-07-24 09:18:38.254426] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3917763 ]
00:33:00.201 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:00.201 Zero copy mechanism will not be used.
00:33:00.201 EAL: No free 2048 kB hugepages reported on node 1
00:33:00.201 [2024-07-24 09:18:38.285174] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
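For readability, the bdevperf command line this stage backgrounds and then waits on, with the logged flags glossed (flag meanings per common bdevperf usage, not stated in this log):

  # -m 2: core mask 0x2 (single reactor)        -r: RPC listen socket
  # -w randread -o 131072 -q 16: random 128 KiB reads at queue depth 16
  # -t 2: 2-second runs    -z: start idle and wait for perform_tests over RPC
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!

Because of -z, the process initializes EAL and the reactor below but issues no I/O until the harness drives it via the bperf.sock RPC socket.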
00:33:00.201 [2024-07-24 09:18:38.316389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:00.201 [2024-07-24 09:18:38.404018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:00.459 09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:00.717 09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:00.717 09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
09:18:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:01.284 nvme0n1
00:33:01.284 09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:01.284 09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
09:18:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:01.284 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:01.284 Zero copy mechanism will not be used.
00:33:01.284 Running I/O for 2 seconds...
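Condensed, the setup just traced is four RPCs plus the RPC-driven test trigger. A sketch using the same socket, target address, and subsystem NQN as this job; the rpc() helper stands in for the script's bperf_rpc, and the comments gloss intent rather than quote the log:

  rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep per-status error counters; retry transient errors indefinitely
  rpc accel_error_inject_error -o crc32c -t disable                   # start clean: no accel error injection yet
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0                          # attach with TCP data digest (ddgst) enabled
  rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt crc32c results ('-i 32' as logged)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                            # kick off the 2-second randread run

With data digest enabled and the crc32c path corrupted, every affected READ completes with the digest-error / transient-transport-error pattern that fills the output below.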
00:33:01.284 [2024-07-24 09:18:39.243075] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:01.284 [2024-07-24 09:18:39.243136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:01.284 [2024-07-24 09:18:39.243157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... further identical data digest error / COMMAND TRANSIENT TRANSPORT ERROR triples for tqpair 0xe4b390 (qid:1 cid:15, len:32), timestamps 2024-07-24 09:18:39.250940 through 09:18:40.158767, trimmed ...]
00:33:02.066 [2024-07-24 09:18:40.166565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:02.066 [2024-07-24 09:18:40.166596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:02.066 [2024-07-24 09:18:40.166613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0
m:0 dnr:0 00:33:02.066 [2024-07-24 09:18:40.174470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.066 [2024-07-24 09:18:40.174513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.066 [2024-07-24 09:18:40.174533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.338 [2024-07-24 09:18:40.182334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.338 [2024-07-24 09:18:40.182378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.338 [2024-07-24 09:18:40.182411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.338 [2024-07-24 09:18:40.190217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.338 [2024-07-24 09:18:40.190249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.338 [2024-07-24 09:18:40.190266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.338 [2024-07-24 09:18:40.198072] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.338 [2024-07-24 09:18:40.198116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.338 [2024-07-24 09:18:40.198137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.338 [2024-07-24 09:18:40.206155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.338 [2024-07-24 09:18:40.206200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.338 [2024-07-24 09:18:40.206217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.338 [2024-07-24 09:18:40.214050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.340 [2024-07-24 09:18:40.214084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.340 [2024-07-24 09:18:40.214113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.340 [2024-07-24 09:18:40.222178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.340 [2024-07-24 09:18:40.222207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.341 [2024-07-24 09:18:40.222228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.341 [2024-07-24 09:18:40.230012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.341 [2024-07-24 09:18:40.230055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.230075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.238226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.238258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.238274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.246338] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.246372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.246398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.254142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.254187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.254203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.262012] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.262045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.262063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.270206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.270235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.270252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.278287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.278315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.278331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.286199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.286228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.286244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.294035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.294067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.294086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.302185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.302215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.302232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.310170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.342 [2024-07-24 09:18:40.310198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.342 [2024-07-24 09:18:40.310214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.342 [2024-07-24 09:18:40.318121] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.318167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.318183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.326090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.326131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.326164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.333959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.333991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 
[2024-07-24 09:18:40.334010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.342048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.342081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.342100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.350688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.350721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.350740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.360488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.360522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.360541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.369738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.369784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.369801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.379838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.343 [2024-07-24 09:18:40.379872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.343 [2024-07-24 09:18:40.379892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.343 [2024-07-24 09:18:40.390150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.390180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.390203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.398962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.398993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.399010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.407999] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.408033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.408052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.417932] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.417967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.417986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.426372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.426402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.426418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.435572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.435607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.435626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.344 [2024-07-24 09:18:40.444841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.344 [2024-07-24 09:18:40.444872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.344 [2024-07-24 09:18:40.444902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.454289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.454320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.454353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.463281] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.463328] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.463346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.471219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.471261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.471281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.479199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.479229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.479245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.487206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.487234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.487249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.495093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.495149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.495167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.502947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.502980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.502999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.510692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.510725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.510744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.518650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.602 [2024-07-24 09:18:40.518683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.602 [2024-07-24 09:18:40.518701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.602 [2024-07-24 09:18:40.526766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.526799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.526817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.534860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.534893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.534911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.543115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.543147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.543178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.551066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.551099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.551127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.558947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.558980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.558998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.566720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.566765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.566781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.574631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 
00:33:02.603 [2024-07-24 09:18:40.574663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.574681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.582539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.582572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.582590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.590449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.590495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.590513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.598249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.598277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.598293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.606138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.606183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.606204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.613982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.614028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.614044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.621895] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.621929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.621947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.629917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.629950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.629968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.637979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.638012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.638030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.645987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.646020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.646038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.653784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.653816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.653834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.661810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.661843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.661861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.669779] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.669812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.669830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.677816] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.677864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.677884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.685932] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.685965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.685983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.693941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.693973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.693991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.701990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.702023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.702042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.603 [2024-07-24 09:18:40.710125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.603 [2024-07-24 09:18:40.710158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.603 [2024-07-24 09:18:40.710189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.718467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.718515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.718540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.726617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.726674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.734603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.734638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.734657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:02.862 [2024-07-24 09:18:40.742471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.742505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.742523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.750484] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.750519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.750537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.758347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.758391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.758407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.766201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.766235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.766253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.774348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.774377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.774393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.782267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.782296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.782312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.790558] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.790593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.790611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.798620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.798654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.798672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.806623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.806651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.806682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.814403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.814437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.814462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.822438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.822484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.822503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.830432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.830484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.838435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.838482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.838501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.846350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.846380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.846415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.854467] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.854501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.854519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.862555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.862589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.862 [2024-07-24 09:18:40.862608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.862 [2024-07-24 09:18:40.870393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.862 [2024-07-24 09:18:40.870437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.870456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.878453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.878487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.878505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.886522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.886555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.894476] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.894509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.894527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.902375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.902405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.902422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.910500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.910534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.910553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.918746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.918780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.918799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.926748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.926781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.926800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.934709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.934742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.934761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.942767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.942800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.942818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.950919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.950952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.950976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.958953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.958985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:02.863 [2024-07-24 09:18:40.959004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.966845] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.966878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.966896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:02.863 [2024-07-24 09:18:40.974691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:02.863 [2024-07-24 09:18:40.974738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:02.863 [2024-07-24 09:18:40.974758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:40.982774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:40.982812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:40.982831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:40.992418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:40.992465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:40.992484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.002218] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.002249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.002265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.011665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.011701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.011721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.019990] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.020024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13792 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.020043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.027600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.027650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.027667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.035418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.035445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.035460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.044718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.044750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.044767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.054504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.054551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.054567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.063604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.063639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.063659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.072991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.073026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.073045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.080725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.080769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.080786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.088488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.088532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.088548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.096359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.096387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.096418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.104247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.104287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.104304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.112078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.112112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.112130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.120069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.120109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.120129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.127974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.128007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.128026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.136037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.136070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.136088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.144151] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.144196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.144212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.151674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.151708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.151726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.122 [2024-07-24 09:18:41.159985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.122 [2024-07-24 09:18:41.160018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.122 [2024-07-24 09:18:41.160037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:03.123 [2024-07-24 09:18:41.168215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.123 [2024-07-24 09:18:41.168244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.123 [2024-07-24 09:18:41.168270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:03.123 [2024-07-24 09:18:41.176479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.123 [2024-07-24 09:18:41.176514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.123 [2024-07-24 09:18:41.176533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:03.123 [2024-07-24 09:18:41.184615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 00:33:03.123 [2024-07-24 09:18:41.184648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:03.123 [2024-07-24 09:18:41.184666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:03.123 [2024-07-24 09:18:41.192594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390) 
00:33:03.123 [2024-07-24 09:18:41.192627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.192646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.123 [2024-07-24 09:18:41.200624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:03.123 [2024-07-24 09:18:41.200658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.200676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.123 [2024-07-24 09:18:41.208410] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:03.123 [2024-07-24 09:18:41.208453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.208472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:03.123 [2024-07-24 09:18:41.216635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:03.123 [2024-07-24 09:18:41.216669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.216688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:03.123 [2024-07-24 09:18:41.224539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:03.123 [2024-07-24 09:18:41.224573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.224591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:03.123 [2024-07-24 09:18:41.232687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xe4b390)
00:33:03.123 [2024-07-24 09:18:41.232717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:03.123 [2024-07-24 09:18:41.232750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:33:03.123 
00:33:03.123 Latency(us)
00:33:03.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:03.123 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:03.123 nvme0n1 : 2.00 3824.44 478.06 0.00 0.00 4177.95 1395.67 13786.83
00:33:03.123 ===================================================================================================================
00:33:03.123 Total : 3824.44 478.06 0.00 0.00 4177.95 1395.67 13786.83
00:33:03.381 0
00:33:03.381 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:03.381 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:03.381 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:03.381 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:03.381 | .driver_specific
00:33:03.381 | .nvme_error
00:33:03.381 | .status_code
00:33:03.381 | .command_transient_transport_error'
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 247 > 0 ))
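[Editor's note] The randread error-injection pass above closes out consistently: at the 131072-byte (128 KiB) IO size in the Job line, 3824.44 IOPS / 8 = 478.06 MiB/s, which matches the throughput column. The get_transient_errcount helper traced above then reads the per-bdev NVMe error counters that --nvme-error-stat exposes; a minimal standalone sketch of the same query (assuming, as in this run, a bdevperf instance listening on /var/tmp/bperf.sock and a bdev named nvme0n1):

  # Fetch iostat for the bdev and extract the transient transport error count --
  # the same rpc.py + jq pipeline as the host/digest.sh@27-28 trace above.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

Here the counter came back as 247, which is why the (( 247 > 0 )) check above passes.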
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3917763
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3917763 ']'
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3917763
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917763
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917763'
killing process with pid 3917763
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3917763
Received shutdown signal, test time was about 2.000000 seconds
00:33:03.654 
00:33:03.654 Latency(us)
00:33:03.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:03.654 ===================================================================================================================
00:33:03.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3917763
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3918163
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
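[Editor's note] The host/digest.sh@57 line above launches the bdevperf instance for the next error pass (randwrite, 4 KiB IOs, queue depth 128). A gloss on its flags, as a reading of the command above rather than anything stated in the log:

  # -m 2: core mask (run on core 1)    -r: RPC socket the bperf_rpc helpers use
  # -w randwrite: IO pattern           -o 4096: IO size in bytes
  # -t 2: run time in seconds          -q 128: queue depth
  # -z: start idle and wait for a perform_tests RPC before issuing IO
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z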
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3918163 /var/tmp/bperf.sock
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3918163 ']'
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:03.654 09:18:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:03.913 [2024-07-24 09:18:41.810554] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:33:03.913 [2024-07-24 09:18:41.810634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918163 ]
00:33:03.913 EAL: No free 2048 kB hugepages reported on node 1
00:33:03.913 [2024-07-24 09:18:41.840934] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:03.913 [2024-07-24 09:18:41.871968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:03.913 [2024-07-24 09:18:41.957581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:04.171 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:04.171 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:04.171 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:04.171 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:04.429 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
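[Editor's note] Condensed, the setup traced above plus the two steps that follow just below amount to this RPC sequence. This is a sketch assembled from the trace, not extra commands from the log; note the accel_error_inject_error calls go through rpc_cmd, which presumably addresses the nvmf target application's own RPC socket rather than bperf.sock (an assumption here):

  # Track NVMe error completions per bdev and retry failed IO indefinitely.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep CRC32C corruption disabled while the controller attaches cleanly.
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # Attach over TCP with data digest (--ddgst) enabled, so corrupted payload CRCs surface as digest errors.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-enable corruption for crc32c operations (-i 256 as in the trace), then start the run.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests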
00:33:04.687 nvme0n1
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:04.687 09:18:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:04.946 Running I/O for 2 seconds...
00:33:04.946 [2024-07-24 09:18:42.863852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ed920
00:33:04.946 [2024-07-24 09:18:42.864981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.946 [2024-07-24 09:18:42.865023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:04.946 [2024-07-24 09:18:42.875939] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f9f68
00:33:04.946 [2024-07-24 09:18:42.877017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.946 [2024-07-24 09:18:42.877050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:04.946 [2024-07-24 09:18:42.889316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e38d0
00:33:04.946 [2024-07-24 09:18:42.890580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.946 [2024-07-24 09:18:42.890613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:04.946 [2024-07-24 09:18:42.902586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e4de8
00:33:04.946 [2024-07-24 09:18:42.903991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:15042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.946 [2024-07-24 09:18:42.904024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:04.946 [2024-07-24 09:18:42.915874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e7818
00:33:04.946 [2024-07-24 09:18:42.917486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:04.946 [2024-07-24 09:18:42.917518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:04.946 [2024-07-24 09:18:42.929185]
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f9f68 00:33:04.946 [2024-07-24 09:18:42.930926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:4504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.930959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:04.946 [2024-07-24 09:18:42.942386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f0788 00:33:04.946 [2024-07-24 09:18:42.944360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:23525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.944389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:04.946 [2024-07-24 09:18:42.955703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ed0b0 00:33:04.946 [2024-07-24 09:18:42.957818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.957849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:04.946 [2024-07-24 09:18:42.964764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190fc128 00:33:04.946 [2024-07-24 09:18:42.965690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.965722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:04.946 [2024-07-24 09:18:42.976831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ec408 00:33:04.946 [2024-07-24 09:18:42.977754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.977786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:04.946 [2024-07-24 09:18:42.990122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f92c0 00:33:04.946 [2024-07-24 09:18:42.991207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.946 [2024-07-24 09:18:42.991235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:04.947 [2024-07-24 09:18:43.003352] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ebfd0 00:33:04.947 [2024-07-24 09:18:43.004575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.947 [2024-07-24 09:18:43.004606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:04.947 
[2024-07-24 09:18:43.016646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e4140 00:33:04.947 [2024-07-24 09:18:43.018056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.947 [2024-07-24 09:18:43.018088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:04.947 [2024-07-24 09:18:43.028418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e0630 00:33:04.947 [2024-07-24 09:18:43.029401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.947 [2024-07-24 09:18:43.029429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:04.947 [2024-07-24 09:18:43.040037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190fef90 00:33:04.947 [2024-07-24 09:18:43.040923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.947 [2024-07-24 09:18:43.040952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:04.947 [2024-07-24 09:18:43.054139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190fd208 00:33:04.947 [2024-07-24 09:18:43.055279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:04.947 [2024-07-24 09:18:43.055307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.067355] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f3a28 00:33:05.206 [2024-07-24 09:18:43.068608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.068643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.079294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f92c0 00:33:05.206 [2024-07-24 09:18:43.080543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.080581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.092601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f9f68 00:33:05.206 [2024-07-24 09:18:43.094001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.094033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 
dnr:0 00:33:05.206 [2024-07-24 09:18:43.104462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f7100 00:33:05.206 [2024-07-24 09:18:43.105368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:23168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.105397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.116915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ec408 00:33:05.206 [2024-07-24 09:18:43.117816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.117849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.129430] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f2d80 00:33:05.206 [2024-07-24 09:18:43.130430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.130473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.142097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e1b48 00:33:05.206 [2024-07-24 09:18:43.142993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.143023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.154765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190fb048 00:33:05.206 [2024-07-24 09:18:43.155657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.155688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:05.206 [2024-07-24 09:18:43.166805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e88f8 00:33:05.206 [2024-07-24 09:18:43.167705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.206 [2024-07-24 09:18:43.167736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.179803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e0ea0 00:33:05.207 [2024-07-24 09:18:43.180865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.180897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.193944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ff3c8 00:33:05.207 [2024-07-24 09:18:43.195283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.195312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.206981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ddc00 00:33:05.207 [2024-07-24 09:18:43.208470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.208501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.218962] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e5658 00:33:05.207 [2024-07-24 09:18:43.220373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.220402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.232130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ef270 00:33:05.207 [2024-07-24 09:18:43.233682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:6483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.233714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.243971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f9b30 00:33:05.207 [2024-07-24 09:18:43.245035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.245067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.256734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190df118 00:33:05.207 [2024-07-24 09:18:43.257660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.257691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.269633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ee190 00:33:05.207 [2024-07-24 09:18:43.270850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.270881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.281314] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e0ea0 00:33:05.207 [2024-07-24 09:18:43.282539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:17505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.282570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.294594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e12d8 00:33:05.207 [2024-07-24 09:18:43.295966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:20545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.295997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.306402] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e3498 00:33:05.207 [2024-07-24 09:18:43.307358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.307386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.207 [2024-07-24 09:18:43.319256] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190efae0 00:33:05.207 [2024-07-24 09:18:43.320043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.207 [2024-07-24 09:18:43.320077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.333831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e1710 00:33:05.465 [2024-07-24 09:18:43.335588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.335621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.345704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190de470 00:33:05.465 [2024-07-24 09:18:43.346922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:12555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.346955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.358446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190df118 00:33:05.465 [2024-07-24 09:18:43.359536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.359568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.370387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e5658 00:33:05.465 [2024-07-24 09:18:43.372271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.372300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.381284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f6458 00:33:05.465 [2024-07-24 09:18:43.382190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.382218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.394499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e6b70 00:33:05.465 [2024-07-24 09:18:43.395528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:22237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.395559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.407747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f5be8 00:33:05.465 [2024-07-24 09:18:43.408990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.409027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.420996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f6020 00:33:05.465 [2024-07-24 09:18:43.422448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.422480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.432917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e73e0 00:33:05.465 [2024-07-24 09:18:43.433804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.433835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.445670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e0630 00:33:05.465 [2024-07-24 09:18:43.446411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.446456] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.458884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190ea680 00:33:05.465 [2024-07-24 09:18:43.459788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.459818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.472223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f1430 00:33:05.465 [2024-07-24 09:18:43.473349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.473377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.486758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190f31b8 00:33:05.465 [2024-07-24 09:18:43.488882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.488913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.495780] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e12d8 00:33:05.465 [2024-07-24 09:18:43.496686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.496717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.507831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e5220 00:33:05.465 [2024-07-24 09:18:43.508733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.508763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.522559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.465 [2024-07-24 09:18:43.523116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.523163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.536615] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.465 [2024-07-24 09:18:43.536948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 
09:18:43.536975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.550598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.465 [2024-07-24 09:18:43.550863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.550893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.564705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.465 [2024-07-24 09:18:43.564971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.565001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.465 [2024-07-24 09:18:43.579034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.465 [2024-07-24 09:18:43.579284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.465 [2024-07-24 09:18:43.579314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.593230] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.593509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.593543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.607525] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.607760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:16898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.607791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.621711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.621986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.622017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.635896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.636184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:05.724 [2024-07-24 09:18:43.636212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.650072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.650366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:25259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.650394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.664350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.664613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.664644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.678452] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.678716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.678748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.692684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.692948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.692979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.706812] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.707083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:11167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.707122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.720892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.721169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:05.724 [2024-07-24 09:18:43.721197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:05.724 [2024-07-24 09:18:43.735211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8 00:33:05.724 [2024-07-24 09:18:43.735495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20291 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000
00:33:05.724 [2024-07-24 09:18:43.735525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:05.724 [2024-07-24 09:18:43.749312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8
00:33:05.724 [2024-07-24 09:18:43.749582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:05.724 [2024-07-24 09:18:43.749612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[... the same three-line pattern (injected data digest error on tqpair 0x10be940, the failed WRITE, its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 14 ms, with varying lba and cid, through 09:18:44.838686 ...]
00:33:06.764 [2024-07-24 09:18:44.852517] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10be940) with pdu=0x2000190e23b8
00:33:06.764 [2024-07-24 09:18:44.852793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:06.764 [2024-07-24 09:18:44.852823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:06.764
00:33:06.764 Latency(us)
00:33:06.764 Device Information                                                           : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:06.764 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:06.764 nvme0n1                                                                      :       2.01   18693.73      73.02      0.00     0.00    6830.51    3495.25   16990.81
00:33:06.764 ===================================================================================================================
00:33:06.764 Total                                                                        :               18693.73      73.02      0.00     0.00    6830.51    3495.25   16990.81
00:33:06.764 0
00:33:06.764 09:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:06.764 09:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:06.764 09:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:06.764 09:18:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 ))
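The @71/@27/@28 trace above is the test's pass criterion: after the timed run, get_transient_errcount reads the bdev's NVMe error counters over the bperf RPC socket and asserts that at least one WRITE completed with COMMAND TRANSIENT TRANSPORT ERROR (147 did here). A minimal standalone sketch of the same check, assuming a bdevperf instance already listening on /var/tmp/bperf.sock and SPDK_DIR pointing at the checkout (both taken from the trace; the counter is only populated because bdev_nvme_set_options was given --nvme-error-stat earlier):

  # Read the transient-transport-error counter for nvme0n1 and fail if none were recorded.
  errs=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errs > 0 )) || { echo "expected digest-induced transient transport errors, got ${errs}" >&2; exit 1; }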
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3918163
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3918163 ']'
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3918163
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918163
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918163'
00:33:07.023 killing process with pid 3918163
00:33:07.023 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3918163
00:33:07.023 Received shutdown signal, test time was about 2.000000 seconds
00:33:07.023
00:33:07.023 Latency(us)
00:33:07.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:07.023 ===================================================================================================================
00:33:07.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3918163
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3918572
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3918572 /var/tmp/bperf.sock
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3918572 ']'
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:07.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:07.282 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
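The launch-and-wait steps above reduce to a few lines of shell. The sketch below reproduces them, assuming SPDK_DIR is the checkout path from the trace and that polling rpc_get_methods is an adequate stand-in for autotest_common.sh's waitforlisten loop (the socket, pid, and max_retries=100 are taken from the trace):

  # Launch bdevperf for the 131072-byte / qd 16 randwrite pass; -z makes it idle until perform_tests.
  "$SPDK_DIR"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll until the process answers on its UNIX-domain RPC socket, as waitforlisten does.
  for ((i = 0; i < 100; i++)); do
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
  done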
00:33:07.282 [2024-07-24 09:18:45.388915] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:33:07.282 [2024-07-24 09:18:45.388996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3918572 ]
00:33:07.282 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:07.282 Zero copy mechanism will not be used.
00:33:07.540 EAL: No free 2048 kB hugepages reported on node 1
00:33:07.540 [2024-07-24 09:18:45.419878] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:07.540 [2024-07-24 09:18:45.447151] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:07.540 [2024-07-24 09:18:45.533893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:07.540 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:07.540 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:07.540 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:07.540 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:07.798 09:18:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:08.383 nvme0n1
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:08.383 09:18:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
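Taken together, the @61-@69 steps above are the whole setup for this error-injection pass: enable per-status NVMe error counters and unlimited bdev retries, keep crc32c corruption off while the controller attaches so the connect itself succeeds, attach with --ddgst so every TCP data PDU carries a CRC32C data digest, then corrupt every 32nd crc32c operation and kick off the workload. A condensed sketch of the same RPC sequence, assuming (as the autotest helpers imply) that bperf_rpc talks to bdevperf on /var/tmp/bperf.sock while plain rpc_cmd goes to the nvmf target application's default RPC socket:

  bperf_rpc() { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }    # initiator (bdevperf)
  target_rpc() { "$SPDK_DIR"/scripts/rpc.py "$@"; }                          # nvmf target, default socket
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1    # count errors, retry forever
  target_rpc accel_error_inject_error -o crc32c -t disable                   # no corruption during attach
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # DDGST on: data digests checked
  target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt every 32nd crc32c op
  "$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests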
00:33:08.383 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:08.383 Zero copy mechanism will not be used.
00:33:08.383 Running I/O for 2 seconds...
00:33:08.383 [2024-07-24 09:18:46.440654] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90
00:33:08.383 [2024-07-24 09:18:46.441067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:08.383 [2024-07-24 09:18:46.441120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-line pattern (injected data digest error on tqpair 0x10c05c0, the failed WRITE (len:32), its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 9 ms, with varying lba, through 09:18:46.789722 ...]
00:33:08.904 [2024-07-24 09:18:46.797344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90
00:33:08.904 [2024-07-24 09:18:46.797685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:08.904 [2024-07-24 09:18:46.797713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:08.904 [2024-07-24 09:18:46.805344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90
[2024-07-24 09:18:46.805675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.805703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.813691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.814027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.814054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.821374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.821711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.821745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.829550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.829895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.829923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.838027] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.838365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.838394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.845970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.846310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.846339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.854612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.904 [2024-07-24 09:18:46.854948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.904 [2024-07-24 09:18:46.854976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.904 [2024-07-24 09:18:46.862952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) 
with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.863290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.863318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.871258] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.871626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.871652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.879963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.880314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.880342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.889156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.889516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.889558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.897158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.897277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.897305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.905671] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.906030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.906058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.914183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.914514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.914550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.922620] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.922976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.923017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.931385] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.931735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.931767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.939543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.939656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.939684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.948370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.948710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.948739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.958081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.958450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.958492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.967768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.968146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.968181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.977020] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.977366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.977395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 
09:18:46.986332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.986691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.986733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:46.995973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:46.996310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:46.996339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:47.005669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:47.006012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:47.006041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:08.905 [2024-07-24 09:18:47.014888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:08.905 [2024-07-24 09:18:47.015241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:08.905 [2024-07-24 09:18:47.015279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.024045] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.024424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.164 [2024-07-24 09:18:47.024455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.033753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.034099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.164 [2024-07-24 09:18:47.034135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.042972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.043158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.164 [2024-07-24 09:18:47.043189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.052521] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.052643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.164 [2024-07-24 09:18:47.052672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.062198] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.062542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.164 [2024-07-24 09:18:47.062569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.164 [2024-07-24 09:18:47.071204] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.164 [2024-07-24 09:18:47.071364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.071391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.080651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.080983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.081010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.089021] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.089351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.089380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.097983] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.098325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.098353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.106820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.107196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.107225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.115558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.115910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.115937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.124908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.125272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.125300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.134094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.134446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.134474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.142440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.142790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.142818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.151403] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.151779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.151807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.159698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.160028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.160057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.168414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.168785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.168828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.177412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.177760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.177787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.186112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.186454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.186483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.194848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.195223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.195251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.203176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.203505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.203554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.211728] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.212070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.212099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.219569] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.219704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.219731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.227616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.227986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 
[2024-07-24 09:18:47.228029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.236016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.236348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.236377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.244427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.244777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.244805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.252509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.252837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.252865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.260933] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.261297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.269805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.270140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.270169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.165 [2024-07-24 09:18:47.277662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.165 [2024-07-24 09:18:47.278098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.165 [2024-07-24 09:18:47.278140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.424 [2024-07-24 09:18:47.286623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.424 [2024-07-24 09:18:47.286961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.424 [2024-07-24 09:18:47.286990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.424 [2024-07-24 09:18:47.294898] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.295284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.295313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.302332] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.302449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.302477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.311129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.311486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.311528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.319111] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.319451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.319478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.327141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.327453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.327481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.333847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.334172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.334202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.340918] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.341318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.348894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.349246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.349274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.356391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.356862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.356890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.365175] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.365588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.365617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.373063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.373443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.373472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.380188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.380500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.380543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.387226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.387589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.387616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.394877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.395288] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.395316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.402291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.402641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.402668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.409949] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.410346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.410381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.416894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.417212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.417241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.423883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.424207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.424235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.431457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.431788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.431815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.439085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.439408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.439436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.446336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 
[2024-07-24 09:18:47.446799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.446826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.454225] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.454537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.454565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.461694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.462072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.462100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.469496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.469854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.425 [2024-07-24 09:18:47.469881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.425 [2024-07-24 09:18:47.476951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.425 [2024-07-24 09:18:47.477346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.477374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.484631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.484945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.484973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.492190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.492514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.492541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.500170] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.500527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.500554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.508459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.508786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.508814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.516396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.516836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.516864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.523704] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.524019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.524046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.532155] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.426 [2024-07-24 09:18:47.532593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.426 [2024-07-24 09:18:47.532619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:09.426 [2024-07-24 09:18:47.539872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.685 [2024-07-24 09:18:47.540218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.685 [2024-07-24 09:18:47.540256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:09.685 [2024-07-24 09:18:47.547995] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90 00:33:09.685 [2024-07-24 09:18:47.548413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:09.685 [2024-07-24 09:18:47.548459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:09.685 [2024-07-24 09:18:47.555933] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90
00:33:09.685 [2024-07-24 09:18:47.556299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:09.685 [2024-07-24 09:18:47.556328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... dozens of further identical Data digest error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets, 09:18:47.564 through 09:18:48.420, condensed here; only the lba and sqhd values vary between repeats ...]
00:33:10.466 [2024-07-24 09:18:48.427936] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10c05c0) with pdu=0x2000190fef90
00:33:10.466 [2024-07-24 09:18:48.428276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:10.466 [2024-07-24 09:18:48.428302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
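Each triplet above is one WRITE whose CRC32C data digest failed verification in the initiator's data_crc32_calc_done path; the command is then completed with NVMe status 00/22, COMMAND TRANSIENT TRANSPORT ERROR, which the harness tallies below. For context, a minimal sketch of opening an NVMe/TCP connection with digests enabled, using the values nvmf/common.sh exports later in this log; the nvme-cli flag spellings are an assumption and vary across nvme-cli versions:

    # hedged sketch: connect to the test subsystem with header and data
    # digest enabled so the transport computes/verifies CRC32C per PDU;
    # --hdr-digest/--data-digest spellings assumed, check nvme connect --help
    nvme connect --transport=tcp --traddr=127.0.0.1 --trsvcid=4420 \
        --nqn=nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        --hdr-digest --data-digest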
00:33:10.466
00:33:10.466 Latency(us)
00:33:10.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.466 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:10.466 nvme0n1 : 2.00 3743.78 467.97 0.00 0.00 4262.67 3082.62 15534.46
00:33:10.466 ===================================================================================================================
00:33:10.466 Total : 3743.78 467.97 0.00 0.00 4262.67 3082.62 15534.46
00:33:10.466 0
00:33:10.466 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:10.466 | .driver_specific
00:33:10.466 | .nvme_error
00:33:10.466 | .status_code
00:33:10.466 | .command_transient_transport_error'
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:10.724 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 ))
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3918572
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3918572 ']'
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3918572
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3918572
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3918572'
killing process with pid 3918572
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3918572
Received shutdown signal, test time was about 2.000000 seconds
00:33:10.724
00:33:10.724 Latency(us)
00:33:10.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:10.724 ===================================================================================================================
00:33:10.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:10.724 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3918572
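The pass/fail gate above does not parse the error log text; it asks bperf's RPC server for bdev iostat and reads the transient transport error counter directly. A standalone sketch of the same get_transient_errcount pattern, reconstructed from the trace (socket path, bdev name, and jq filter exactly as used in this run):

    # hedged sketch of get_transient_errcount as traced above
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error')
    (( errcount > 0 ))   # this run counted 242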
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3917216
09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3917216 ']'
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3917216
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3917216
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3917216'
killing process with pid 3917216
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3917216
00:33:10.982 09:18:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3917216
00:33:11.241
00:33:11.241 real 0m15.081s
00:33:11.241 user 0m30.106s
00:33:11.241 sys 0m3.959s
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:11.241 ************************************
00:33:11.241 END TEST nvmf_digest_error
00:33:11.241 ************************************
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:11.241 rmmod nvme_tcp
00:33:11.241 rmmod nvme_fabrics
00:33:11.241 rmmod nvme_keyring
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3917216 ']'
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3917216
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3917216 ']'
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3917216
00:33:11.241 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3917216) - No such process
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3917216 is not found'
00:33:11.241 Process with pid 3917216 is not found
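killprocess runs twice above: once against the live bperf pid (3918572), and again from nvmftestfini against 3917216 after it has already exited, which is what produces the failed kill -0 and the not-found message. A sketch of the helper's flow as reconstructed from those two traces; the body of the sudo branch is an assumption, since the trace only shows the comparison:

    # hedged reconstruction of killprocess (the real helper lives in
    # test/common/autotest_common.sh)
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then            # pid still alive?
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            if [ "$process_name" = sudo ]; then
                kill "$(pgrep -P "$pid")"   # assumed: kill the wrapped child
            else
                echo "killing process with pid $pid"
                kill "$pid"
            fi
            wait "$pid" || true                        # reap it
        else
            echo "Process with pid $pid is not found"
        fi
    }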
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:11.241 09:18:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:33:13.776
00:33:13.776 real 0m34.956s
00:33:13.776 user 1m1.819s
00:33:13.776 sys 0m9.591s
09:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:13.776 ************************************
00:33:13.776 END TEST nvmf_digest
00:33:13.776 ************************************
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:33:13.776 ************************************
00:33:13.776 START TEST nvmf_bdevperf
00:33:13.776 ************************************
00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:33:13.776 * Looking for test storage...
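The nvmftestfini sequence traced above tears the host side down in a fixed order: sync, unload nvme-tcp and nvme-fabrics (the rmmod lines are modprobe's verbose output), reap any recorded target pid, drop the cvl_0_0_ns_spdk namespace, and flush the cvl_0_1 address. A condensed sketch of that order; the retry sleep, the nvmfpid variable name, and the body of _remove_spdk_ns are assumptions, the rest mirrors the trace:

    # hedged sketch of the nvmftestfini flow shown above
    nvmftestfini_sketch() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break   # module can be busy right after I/O
            sleep 1                            # assumed backoff; trace shows only the loop
        done
        modprobe -v -r nvme-fabrics
        set -e
        [ -n "$nvmfpid" ] && killprocess "$nvmfpid"    # often already gone, as above
        ip netns delete cvl_0_0_ns_spdk 2>/dev/null    # assumed body of _remove_spdk_ns
        ip -4 addr flush cvl_0_1
    }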
00:33:13.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.776 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:13.777 09:18:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:15.154 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:15.154 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:15.154 09:18:53 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:15.154 Found net devices under 0000:09:00.0: cvl_0_0 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:15.154 Found net devices under 0000:09:00.1: cvl_0_1 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.154 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:15.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:33:15.412 00:33:15.412 --- 10.0.0.2 ping statistics --- 00:33:15.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.412 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:33:15.412 00:33:15.412 --- 10.0.0.1 ping statistics --- 00:33:15.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.412 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3920922 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3920922 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3920922 ']' 
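For reference, the namespace wiring that nvmf_tcp_init traced above condenses to the sketch below. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the values from this run; they will differ on other hosts.

    # target-side port moves into its own namespace; each side gets one address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator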
00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:15.412 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.412 [2024-07-24 09:18:53.448445] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:15.412 [2024-07-24 09:18:53.448544] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.412 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.412 [2024-07-24 09:18:53.487293] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:15.412 [2024-07-24 09:18:53.519244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:15.670 [2024-07-24 09:18:53.613754] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.670 [2024-07-24 09:18:53.613815] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.670 [2024-07-24 09:18:53.613832] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.670 [2024-07-24 09:18:53.613846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.670 [2024-07-24 09:18:53.613858] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
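nvmfappstart then launches the target pinned to core mask 0xE inside that namespace and blocks until its RPC socket answers. A minimal stand-in for the helper, using the binary path and arguments from the log; the rpc_get_methods polling loop is an assumption, one simple way to wait, not waitforlisten's actual implementation:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll until the app is serving RPCs on the default socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done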
00:33:15.670 [2024-07-24 09:18:53.613926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:15.670 [2024-07-24 09:18:53.614273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:15.670 [2024-07-24 09:18:53.614278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.670 [2024-07-24 09:18:53.753123] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.670 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.928 Malloc0 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:15.928 [2024-07-24 09:18:53.817641] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:15.928 { 00:33:15.928 "params": { 00:33:15.928 "name": "Nvme$subsystem", 00:33:15.928 "trtype": "$TEST_TRANSPORT", 00:33:15.928 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.928 "adrfam": "ipv4", 00:33:15.928 "trsvcid": "$NVMF_PORT", 00:33:15.928 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.928 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.928 "hdgst": ${hdgst:-false}, 00:33:15.928 "ddgst": ${ddgst:-false} 00:33:15.928 }, 00:33:15.928 "method": "bdev_nvme_attach_controller" 00:33:15.928 } 00:33:15.928 EOF 00:33:15.928 )") 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:15.928 09:18:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:15.928 "params": { 00:33:15.928 "name": "Nvme1", 00:33:15.928 "trtype": "tcp", 00:33:15.928 "traddr": "10.0.0.2", 00:33:15.928 "adrfam": "ipv4", 00:33:15.928 "trsvcid": "4420", 00:33:15.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.928 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.928 "hdgst": false, 00:33:15.928 "ddgst": false 00:33:15.928 }, 00:33:15.928 "method": "bdev_nvme_attach_controller" 00:33:15.928 }' 00:33:15.928 [2024-07-24 09:18:53.864640] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:15.928 [2024-07-24 09:18:53.864718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3920974 ] 00:33:15.928 EAL: No free 2048 kB hugepages reported on node 1 00:33:15.928 [2024-07-24 09:18:53.900135] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:15.928 [2024-07-24 09:18:53.927974] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.928 [2024-07-24 09:18:54.012305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:16.186 Running I/O for 1 seconds... 
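Spelled out against scripts/rpc.py (which the rpc_cmd helper wraps), the provisioning sequence traced above amounts to roughly the following; transport options, bdev geometry, NQN, serial, and listener address are all the values from this run:

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then consumes the JSON printed above through --json /dev/fd/62, attaching controller Nvme1 to that listener before the verify workload starts.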
00:33:17.119
00:33:17.119                                                                                                 Latency(us)
00:33:17.119 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s     Average        min        max
00:33:17.119 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:17.119      Verification LBA range: start 0x0 length 0x4000
00:33:17.119      Nvme1n1             :       1.01    8716.59      34.05       0.00       0.00   14624.06    2912.71   13495.56
00:33:17.119 ===================================================================================================================
00:33:17.119 Total                       :               8716.59      34.05       0.00       0.00   14624.06    2912.71   13495.56
00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3921208 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:17.377 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:17.378 { 00:33:17.378 "params": { 00:33:17.378 "name": "Nvme$subsystem", 00:33:17.378 "trtype": "$TEST_TRANSPORT", 00:33:17.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:17.378 "adrfam": "ipv4", 00:33:17.378 "trsvcid": "$NVMF_PORT", 00:33:17.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:17.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:17.378 "hdgst": ${hdgst:-false}, 00:33:17.378 "ddgst": ${ddgst:-false} 00:33:17.378 }, 00:33:17.378 "method": "bdev_nvme_attach_controller" 00:33:17.378 } 00:33:17.378 EOF 00:33:17.378 )") 00:33:17.378 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:17.378 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:17.378 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:17.378 09:18:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:17.378 "params": { 00:33:17.378 "name": "Nvme1", 00:33:17.378 "trtype": "tcp", 00:33:17.378 "traddr": "10.0.0.2", 00:33:17.378 "adrfam": "ipv4", 00:33:17.378 "trsvcid": "4420", 00:33:17.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:17.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:17.378 "hdgst": false, 00:33:17.378 "ddgst": false 00:33:17.378 }, 00:33:17.378 "method": "bdev_nvme_attach_controller" 00:33:17.378 }' 00:33:17.378 [2024-07-24 09:18:55.452519] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:17.378 [2024-07-24 09:18:55.452601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3921208 ] 00:33:17.378 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.378 [2024-07-24 09:18:55.484863] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
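As a quick sanity check on the results table above, the MiB/s column is just IOPS multiplied by the 4096-byte I/O size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 8716.59 * 4096 / 1048576 }'    # prints 34.05 MiB/s

The second bdevperf invocation traced above reuses the same JSON but runs the verify workload for 15 seconds (-t 15 -f), long enough for the script to kill -9 the target mid-run.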
00:33:17.636 [2024-07-24 09:18:55.513503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.636 [2024-07-24 09:18:55.595127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.894 Running I/O for 15 seconds... 00:33:20.423 09:18:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3920922 00:33:20.423 09:18:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:20.423 [2024-07-24 09:18:58.423095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:33:20.424 [2024-07-24 09:18:58.423855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.423972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.423989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424213] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:46800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.424 [2024-07-24 09:18:58.424289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.424 [2024-07-24 09:18:58.424320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:46808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:46816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:46832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424545] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:46872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:46904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:20.424 [2024-07-24 09:18:58.424864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:20.424 [2024-07-24 09:18:58.424881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46936 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000
00:33:20.424 [2024-07-24 09:18:58.424897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:20.424 [2024-07-24 09:18:58.424913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:20.424 [2024-07-24 09:18:58.424929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:20.424 [2024-07-24 09:18:58.425125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:20.424 [2024-07-24 09:18:58.425158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the command/completion pair above repeats for every queued I/O on qid:1: WRITE lba 46952 through 47176 (SGL DATA BLOCK) and READ lba 46184 through 46544 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each completed with ABORTED - SQ DELETION (00/08), between 09:18:58.424897 and 09:18:58.427589 ...]
00:33:20.425 [2024-07-24 09:18:58.427605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232ae60 is same with the state(5) to be set
00:33:20.425 [2024-07-24 09:18:58.427624] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:20.425 [2024-07-24 09:18:58.427637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:20.425 [2024-07-24 09:18:58.427649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:46552 len:8 PRP1 0x0 PRP2 0x0
00:33:20.425 [2024-07-24 09:18:58.427663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:20.425 [2024-07-24 09:18:58.427728] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x232ae60 was disconnected and freed. reset controller.
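Everything above is one event: the TCP qpair died, so the driver drains its submission queue and completes every outstanding I/O with status (00/08), which decodes as status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion), before freeing the qpair and kicking off a controller reset. As a minimal sketch (assuming the one-entry-per-line format shown above; the script is illustrative tooling, not part of SPDK), this tallies those aborts from a saved log:

import re
import sys
from collections import Counter

# Matches the driver's command print lines, e.g.
#   nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:46944 len:8 ...
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<op>READ|WRITE) "
    r"sqid:\d+ cid:\d+ nsid:\d+ lba:(?P<lba>\d+) len:\d+"
)
# Matches the completion print lines, e.g. "... ABORTED - SQ DELETION (00/08) qid:1 ..."
CPL_RE = re.compile(
    r"spdk_nvme_print_completion: \*NOTICE\*: .+? "
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)"
)

def summarize(log_text):
    """Pair each printed command with the completion that follows it and
    tally the ones aborted with SCT 0x0 / SC 0x08 (SQ deletion)."""
    ops = Counter()
    lbas = []
    pending = None  # most recent command line, waiting for its completion
    for line in log_text.splitlines():
        cmd = CMD_RE.search(line)
        if cmd:
            pending = cmd
            continue
        cpl = CPL_RE.search(line)
        if cpl and pending is not None:
            if (cpl["sct"], cpl["sc"]) == ("00", "08"):
                ops[pending["op"]] += 1
                lbas.append(int(pending["lba"]))
            pending = None
    if lbas:
        print(f"aborted by SQ deletion: {dict(ops)}, lba span {min(lbas)}-{max(lbas)}")

if __name__ == "__main__":
    summarize(sys.stdin.read())

Fed a captured log on stdin, it prints a per-opcode abort count and the LBA span the aborts covered, which is usually enough to tell whether the whole in-flight window was torn down or only part of it.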
00:33:20.425 [2024-07-24 09:18:58.431627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.425 [2024-07-24 09:18:58.431707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.425 [2024-07-24 09:18:58.432347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.425 [2024-07-24 09:18:58.432377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.425 [2024-07-24 09:18:58.432417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.425 [2024-07-24 09:18:58.432658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.425 [2024-07-24 09:18:58.432911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.425 [2024-07-24 09:18:58.432941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.425 [2024-07-24 09:18:58.432960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.425 [2024-07-24 09:18:58.436568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same reset cycle (resetting controller; connect() failed, errno = 111; sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420; Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor; Ctrlr is in error state; controller reinitialization failed; in failed state; Resetting controller failed.) repeats 31 more times, roughly every 14 ms, from 09:18:58.445856 through 09:18:58.867790 ...]
00:33:20.426 [2024-07-24 09:18:58.459762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.460190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.460219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.460236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.460474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.460716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.460740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.460755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.464338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.426 [2024-07-24 09:18:58.473591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.474013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.474044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.474062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.474309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.474551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.474575] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.474590] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.478163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.426 [2024-07-24 09:18:58.487429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.487861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.487893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.487911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.488162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.488411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.488435] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.488451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.492012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.426 [2024-07-24 09:18:58.501270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.501676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.501718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.501734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.501991] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.502246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.502270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.502286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.505848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.426 [2024-07-24 09:18:58.515108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.515481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.515512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.515530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.515768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.516010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.516034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.516049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.519619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.426 [2024-07-24 09:18:58.529079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.426 [2024-07-24 09:18:58.529495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.426 [2024-07-24 09:18:58.529526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.426 [2024-07-24 09:18:58.529544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.426 [2024-07-24 09:18:58.529789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.426 [2024-07-24 09:18:58.530031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.426 [2024-07-24 09:18:58.530055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.426 [2024-07-24 09:18:58.530070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.426 [2024-07-24 09:18:58.533640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.685 [2024-07-24 09:18:58.543109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.543529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.543557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.543573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.543819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.544062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.544086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.544112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.547675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.685 [2024-07-24 09:18:58.556931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.557338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.557369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.557387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.557625] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.557867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.557890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.557905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.561478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.685 [2024-07-24 09:18:58.570939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.571343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.571375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.571393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.571631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.571874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.571897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.571918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.575493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.685 [2024-07-24 09:18:58.584968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.585428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.585460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.585478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.585717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.585959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.585982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.585997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.589573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.685 [2024-07-24 09:18:58.598833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.599231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.599263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.599280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.599519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.599761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.599785] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.599800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.603376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.685 [2024-07-24 09:18:58.612836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.613240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.613271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.613288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.613526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.613769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.613792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.613808] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.617381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.685 [2024-07-24 09:18:58.626839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.627279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.627310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.627328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.627566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.627807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.627831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.627846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.685 [2024-07-24 09:18:58.631425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.685 [2024-07-24 09:18:58.640702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.685 [2024-07-24 09:18:58.641097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.685 [2024-07-24 09:18:58.641135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.685 [2024-07-24 09:18:58.641153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.685 [2024-07-24 09:18:58.641391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.685 [2024-07-24 09:18:58.641634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.685 [2024-07-24 09:18:58.641657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.685 [2024-07-24 09:18:58.641672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.645242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.686 [2024-07-24 09:18:58.654706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.655173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.655205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.655223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.655461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.655704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.655727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.655742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.659316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.686 [2024-07-24 09:18:58.668578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.669009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.669040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.669057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.669306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.669555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.669579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.669594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.673163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.686 [2024-07-24 09:18:58.682412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.682839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.682870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.682887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.683138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.683380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.683404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.683420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.686996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.686 [2024-07-24 09:18:58.696274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.696669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.696700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.696718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.696956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.697208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.697233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.697248] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.700812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.686 [2024-07-24 09:18:58.710160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.710583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.710615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.710633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.710871] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.711123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.711148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.711163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.714735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.686 [2024-07-24 09:18:58.723996] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.724421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.724453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.724471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.724709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.724951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.724975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.724990] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.728565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.686 [2024-07-24 09:18:58.738027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.738462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.738494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.738512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.738749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.738990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.739014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.739029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.742605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.686 [2024-07-24 09:18:58.751857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.752268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.752299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.752317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.752555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.752797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.752820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.752836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.756410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.686 [2024-07-24 09:18:58.765877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.766307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.766339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.766362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.766601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.766843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.766866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.766881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.770463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.686 [2024-07-24 09:18:58.779717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.686 [2024-07-24 09:18:58.780113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.686 [2024-07-24 09:18:58.780145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.686 [2024-07-24 09:18:58.780163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.686 [2024-07-24 09:18:58.780401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.686 [2024-07-24 09:18:58.780643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.686 [2024-07-24 09:18:58.780667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.686 [2024-07-24 09:18:58.780682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.686 [2024-07-24 09:18:58.784260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.687 [2024-07-24 09:18:58.793742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.687 [2024-07-24 09:18:58.794143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.687 [2024-07-24 09:18:58.794175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.687 [2024-07-24 09:18:58.794193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.687 [2024-07-24 09:18:58.794432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.687 [2024-07-24 09:18:58.794673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.687 [2024-07-24 09:18:58.794697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.687 [2024-07-24 09:18:58.794712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.687 [2024-07-24 09:18:58.798288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:20.945 [2024-07-24 09:18:58.807748] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:20.945 [2024-07-24 09:18:58.808172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.945 [2024-07-24 09:18:58.808204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:20.945 [2024-07-24 09:18:58.808222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:20.945 [2024-07-24 09:18:58.808460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:20.945 [2024-07-24 09:18:58.808708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:20.945 [2024-07-24 09:18:58.808731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:20.945 [2024-07-24 09:18:58.808747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:20.945 [2024-07-24 09:18:58.812322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:20.945 [2024-07-24 09:18:58.821573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.945 [2024-07-24 09:18:58.822000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.945 [2024-07-24 09:18:58.822031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.945 [2024-07-24 09:18:58.822049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.945 [2024-07-24 09:18:58.822300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.945 [2024-07-24 09:18:58.822542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.945 [2024-07-24 09:18:58.822566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.945 [2024-07-24 09:18:58.822581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.945 [2024-07-24 09:18:58.826149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.945 [2024-07-24 09:18:58.835404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.945 [2024-07-24 09:18:58.835793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.945 [2024-07-24 09:18:58.835824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.945 [2024-07-24 09:18:58.835841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.945 [2024-07-24 09:18:58.836080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.945 [2024-07-24 09:18:58.836333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.945 [2024-07-24 09:18:58.836357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.945 [2024-07-24 09:18:58.836372] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.945 [2024-07-24 09:18:58.839931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.945 [2024-07-24 09:18:58.849393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.945 [2024-07-24 09:18:58.849815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.945 [2024-07-24 09:18:58.849846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.945 [2024-07-24 09:18:58.849863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.945 [2024-07-24 09:18:58.850112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.945 [2024-07-24 09:18:58.850355] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.850379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.850394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.853958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.863235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.863652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.863683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.863701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.863940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.864195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.864219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.864235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.867790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.877251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.877641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.877671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.877689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.877927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.878179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.878203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.878217] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.881784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.891269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.891671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.891702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.891720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.891958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.892212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.892237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.892252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.895813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.905279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.905692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.905723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.905750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.905989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.906243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.906267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.906282] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.909846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.919095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.919490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.919521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.919539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.919777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.920018] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.920042] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.920057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.923631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.933090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.933521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.933552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.933570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.933808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.934049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.934073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.934088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.937665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.946926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.947363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.947395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.947412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.947651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.947892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.947921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.947937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.951508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.960757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.961151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.961184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.961202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.961440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.961683] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.961706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.961721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.965300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.974763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.975180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.975211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.975229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.975468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.975710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.975734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.975749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.979322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:58.988798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:58.989216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:58.989247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:58.989265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:58.989503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:58.989744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:58.989768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:58.989783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:58.993360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:59.002831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:59.003268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:59.003299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.946 [2024-07-24 09:18:59.003318] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.946 [2024-07-24 09:18:59.003555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.946 [2024-07-24 09:18:59.003797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.946 [2024-07-24 09:18:59.003820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.946 [2024-07-24 09:18:59.003836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.946 [2024-07-24 09:18:59.007411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.946 [2024-07-24 09:18:59.016662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.946 [2024-07-24 09:18:59.017091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.946 [2024-07-24 09:18:59.017129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.947 [2024-07-24 09:18:59.017148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.947 [2024-07-24 09:18:59.017386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.947 [2024-07-24 09:18:59.017628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.947 [2024-07-24 09:18:59.017651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.947 [2024-07-24 09:18:59.017666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.947 [2024-07-24 09:18:59.021235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.947 [2024-07-24 09:18:59.030484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.947 [2024-07-24 09:18:59.030897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.947 [2024-07-24 09:18:59.030928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.947 [2024-07-24 09:18:59.030946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.947 [2024-07-24 09:18:59.031196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.947 [2024-07-24 09:18:59.031439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.947 [2024-07-24 09:18:59.031463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.947 [2024-07-24 09:18:59.031478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.947 [2024-07-24 09:18:59.035040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.947 [2024-07-24 09:18:59.044512] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.947 [2024-07-24 09:18:59.044904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.947 [2024-07-24 09:18:59.044934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.947 [2024-07-24 09:18:59.044952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.947 [2024-07-24 09:18:59.045208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.947 [2024-07-24 09:18:59.045450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.947 [2024-07-24 09:18:59.045474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.947 [2024-07-24 09:18:59.045489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:20.947 [2024-07-24 09:18:59.049056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:20.947 [2024-07-24 09:18:59.058526] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:20.947 [2024-07-24 09:18:59.058951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:20.947 [2024-07-24 09:18:59.058982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:20.947 [2024-07-24 09:18:59.059000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:20.947 [2024-07-24 09:18:59.059250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:20.947 [2024-07-24 09:18:59.059493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:20.947 [2024-07-24 09:18:59.059517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:20.947 [2024-07-24 09:18:59.059532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.205 [2024-07-24 09:18:59.063106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.205 [2024-07-24 09:18:59.072358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.205 [2024-07-24 09:18:59.072732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.205 [2024-07-24 09:18:59.072763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.205 [2024-07-24 09:18:59.072781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.205 [2024-07-24 09:18:59.073019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.205 [2024-07-24 09:18:59.073274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.205 [2024-07-24 09:18:59.073298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.205 [2024-07-24 09:18:59.073313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.205 [2024-07-24 09:18:59.076876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.205 [2024-07-24 09:18:59.086353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.205 [2024-07-24 09:18:59.086770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.205 [2024-07-24 09:18:59.086801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.205 [2024-07-24 09:18:59.086820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.205 [2024-07-24 09:18:59.087058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.205 [2024-07-24 09:18:59.087312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.205 [2024-07-24 09:18:59.087336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.087357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.090918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.100187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.100609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.100639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.100657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.100895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.101149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.101173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.101188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.104749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.114223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.114637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.114668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.114686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.114924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.115178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.115203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.115218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.118780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.128248] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.128647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.128678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.128696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.128934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.129189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.129213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.129229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.132792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.142264] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.142656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.142693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.142712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.142950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.143205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.143229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.143244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.146802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.156281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.156683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.156714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.156732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.156970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.157223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.157248] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.157264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.160828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.170301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.170714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.170745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.170763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.171001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.171255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.171279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.171294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.174858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.184326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.184716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.184747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.184765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.185003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.185264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.185289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.185304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.188885] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.198162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.198550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.198581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.198598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.198836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.199078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.199110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.199127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.202690] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.212178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.212569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.212600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.212618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.212856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.213098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.213133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.206 [2024-07-24 09:18:59.213149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.206 [2024-07-24 09:18:59.216710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.206 [2024-07-24 09:18:59.226179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.206 [2024-07-24 09:18:59.226577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.206 [2024-07-24 09:18:59.226608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.206 [2024-07-24 09:18:59.226625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.206 [2024-07-24 09:18:59.226863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.206 [2024-07-24 09:18:59.227115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.206 [2024-07-24 09:18:59.227139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.227155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.230724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.240217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.240582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.240613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.240631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.240869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.241123] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.241148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.241163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.244725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.254197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.254687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.254718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.254736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.254975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.255228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.255253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.255268] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.258830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.268089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.268602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.268655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.268674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.268912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.269166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.269190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.269206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.272766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.282020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.282451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.282481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.282504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.282744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.282985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.283008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.283024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.286614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.295872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.296291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.296322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.296340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.296578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.296820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.296842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.296858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.300449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.207 [2024-07-24 09:18:59.309706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.207 [2024-07-24 09:18:59.310184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.207 [2024-07-24 09:18:59.310217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.207 [2024-07-24 09:18:59.310235] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.207 [2024-07-24 09:18:59.310477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.207 [2024-07-24 09:18:59.310719] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.207 [2024-07-24 09:18:59.310743] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.207 [2024-07-24 09:18:59.310758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.207 [2024-07-24 09:18:59.314335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.466 [2024-07-24 09:18:59.323595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.466 [2024-07-24 09:18:59.324036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.466 [2024-07-24 09:18:59.324067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.466 [2024-07-24 09:18:59.324086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.466 [2024-07-24 09:18:59.324333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.466 [2024-07-24 09:18:59.324575] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.466 [2024-07-24 09:18:59.324605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.466 [2024-07-24 09:18:59.324621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.466 [2024-07-24 09:18:59.328197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.466 [2024-07-24 09:18:59.337455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.466 [2024-07-24 09:18:59.337867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.466 [2024-07-24 09:18:59.337898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.466 [2024-07-24 09:18:59.337916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.466 [2024-07-24 09:18:59.338166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.466 [2024-07-24 09:18:59.338409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.466 [2024-07-24 09:18:59.338432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.466 [2024-07-24 09:18:59.338448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.466 [2024-07-24 09:18:59.342004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.466 [2024-07-24 09:18:59.351462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.466 [2024-07-24 09:18:59.351887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.466 [2024-07-24 09:18:59.351918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.351936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.352183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.352426] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.352449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.352465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.356024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.365497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.467 [2024-07-24 09:18:59.365901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.467 [2024-07-24 09:18:59.365933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.365950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.366199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.366450] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.366474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.366489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.370072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.379354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.467 [2024-07-24 09:18:59.379777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.467 [2024-07-24 09:18:59.379809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.379827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.380066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.380316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.380341] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.380356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.383920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.393202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.467 [2024-07-24 09:18:59.393618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.467 [2024-07-24 09:18:59.393650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.393667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.393906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.394161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.394185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.394201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.397763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.407237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.467 [2024-07-24 09:18:59.407606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.467 [2024-07-24 09:18:59.407637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.407654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.407892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.408148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.408192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.408208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.411783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.421242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.467 [2024-07-24 09:18:59.421632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.467 [2024-07-24 09:18:59.421663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.467 [2024-07-24 09:18:59.421686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.467 [2024-07-24 09:18:59.421925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.467 [2024-07-24 09:18:59.422179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.467 [2024-07-24 09:18:59.422203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.467 [2024-07-24 09:18:59.422218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.467 [2024-07-24 09:18:59.425778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.467 [2024-07-24 09:18:59.435242] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.467 [2024-07-24 09:18:59.435609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.467 [2024-07-24 09:18:59.435640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.467 [2024-07-24 09:18:59.435658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.467 [2024-07-24 09:18:59.435896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.467 [2024-07-24 09:18:59.436150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.467 [2024-07-24 09:18:59.436175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.467 [2024-07-24 09:18:59.436190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.467 [2024-07-24 09:18:59.439754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.467 [2024-07-24 09:18:59.449311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.467 [2024-07-24 09:18:59.449734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.467 [2024-07-24 09:18:59.449766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.467 [2024-07-24 09:18:59.449783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.467 [2024-07-24 09:18:59.450022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.467 [2024-07-24 09:18:59.450273] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.467 [2024-07-24 09:18:59.450297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.467 [2024-07-24 09:18:59.450312] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.467 [2024-07-24 09:18:59.453872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.467 [2024-07-24 09:18:59.463156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.467 [2024-07-24 09:18:59.463558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.467 [2024-07-24 09:18:59.463590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.467 [2024-07-24 09:18:59.463608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.467 [2024-07-24 09:18:59.463846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.467 [2024-07-24 09:18:59.464087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.467 [2024-07-24 09:18:59.464129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.467 [2024-07-24 09:18:59.464155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.467 [2024-07-24 09:18:59.467720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.467 [2024-07-24 09:18:59.477193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.467 [2024-07-24 09:18:59.477599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.467 [2024-07-24 09:18:59.477630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.467 [2024-07-24 09:18:59.477648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.467 [2024-07-24 09:18:59.477886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.467 [2024-07-24 09:18:59.478138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.467 [2024-07-24 09:18:59.478162] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.467 [2024-07-24 09:18:59.478178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.467 [2024-07-24 09:18:59.481740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.467 [2024-07-24 09:18:59.491251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.467 [2024-07-24 09:18:59.491646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.467 [2024-07-24 09:18:59.491677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.467 [2024-07-24 09:18:59.491695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.467 [2024-07-24 09:18:59.491933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.492186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.492210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.492225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.495796] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.468 [2024-07-24 09:18:59.505294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.505691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.505722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.505740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.505978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.506233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.506257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.506272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.509839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.468 [2024-07-24 09:18:59.519336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.519803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.519854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.519878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.520130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.520374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.520399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.520414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.523977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.468 [2024-07-24 09:18:59.533272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.533694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.533725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.533743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.533981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.534234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.534259] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.534274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.537837] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.468 [2024-07-24 09:18:59.547110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.547534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.547565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.547583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.547821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.548063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.548086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.548110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.551677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.468 [2024-07-24 09:18:59.560944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.561385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.561417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.561435] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.561679] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.561921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.561945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.561960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.565541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.468 [2024-07-24 09:18:59.574830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.468 [2024-07-24 09:18:59.575235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.468 [2024-07-24 09:18:59.575267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.468 [2024-07-24 09:18:59.575285] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.468 [2024-07-24 09:18:59.575523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.468 [2024-07-24 09:18:59.575765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.468 [2024-07-24 09:18:59.575789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.468 [2024-07-24 09:18:59.575804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.468 [2024-07-24 09:18:59.579377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.727 [2024-07-24 09:18:59.588860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.727 [2024-07-24 09:18:59.589297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-24 09:18:59.589328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.727 [2024-07-24 09:18:59.589346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.727 [2024-07-24 09:18:59.589584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.727 [2024-07-24 09:18:59.589826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.727 [2024-07-24 09:18:59.589849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.727 [2024-07-24 09:18:59.589865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.727 [2024-07-24 09:18:59.593436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.727 [2024-07-24 09:18:59.602689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.727 [2024-07-24 09:18:59.603116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-24 09:18:59.603149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.727 [2024-07-24 09:18:59.603167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.727 [2024-07-24 09:18:59.603405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.727 [2024-07-24 09:18:59.603646] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.727 [2024-07-24 09:18:59.603669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.727 [2024-07-24 09:18:59.603694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.727 [2024-07-24 09:18:59.607281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.727 [2024-07-24 09:18:59.616532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.727 [2024-07-24 09:18:59.616953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-24 09:18:59.616985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.727 [2024-07-24 09:18:59.617003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.727 [2024-07-24 09:18:59.617251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.727 [2024-07-24 09:18:59.617493] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.727 [2024-07-24 09:18:59.617516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.727 [2024-07-24 09:18:59.617532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.727 [2024-07-24 09:18:59.621089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.727 [2024-07-24 09:18:59.630547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.727 [2024-07-24 09:18:59.630980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.727 [2024-07-24 09:18:59.631011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.727 [2024-07-24 09:18:59.631028] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.727 [2024-07-24 09:18:59.631275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.727 [2024-07-24 09:18:59.631518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.727 [2024-07-24 09:18:59.631541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.727 [2024-07-24 09:18:59.631556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.635124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.728 [2024-07-24 09:18:59.644372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.644787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.644818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.644835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.645073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.645325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.645350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.645365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.648924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.728 [2024-07-24 09:18:59.658385] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.658812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.658848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.658867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.659117] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.659359] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.659384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.659399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.662964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.728 [2024-07-24 09:18:59.672237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.672629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.672661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.672679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.672918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.673172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.673196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.673212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.676772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.728 [2024-07-24 09:18:59.686232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.686647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.686678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.686696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.686933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.687186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.687211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.687226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.690797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.728 [2024-07-24 09:18:59.700262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.700683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.700714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.700732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.700969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.701228] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.701253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.701269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.704830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.728 [2024-07-24 09:18:59.714291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.714692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.714723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.714741] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.714979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.715233] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.715257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.715273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.718830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.728 [2024-07-24 09:18:59.728364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.728756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.728787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.728804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.729043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.729294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.729318] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.729333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.732891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.728 [2024-07-24 09:18:59.742350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.742753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.742784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.742802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.743040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.743292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.743317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.743332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.746897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.728 [2024-07-24 09:18:59.756353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.756749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.756779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.756797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.757034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.757287] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.728 [2024-07-24 09:18:59.757312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.728 [2024-07-24 09:18:59.757327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.728 [2024-07-24 09:18:59.760887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.728 [2024-07-24 09:18:59.770361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.728 [2024-07-24 09:18:59.770756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.728 [2024-07-24 09:18:59.770787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.728 [2024-07-24 09:18:59.770804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.728 [2024-07-24 09:18:59.771042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.728 [2024-07-24 09:18:59.771295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.771320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.771335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.729 [2024-07-24 09:18:59.774893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.729 [2024-07-24 09:18:59.784358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.729 [2024-07-24 09:18:59.784750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.729 [2024-07-24 09:18:59.784781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.729 [2024-07-24 09:18:59.784799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.729 [2024-07-24 09:18:59.785037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.729 [2024-07-24 09:18:59.785289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.785313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.785328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.729 [2024-07-24 09:18:59.788900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.729 [2024-07-24 09:18:59.798362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.729 [2024-07-24 09:18:59.798779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.729 [2024-07-24 09:18:59.798810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.729 [2024-07-24 09:18:59.798833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.729 [2024-07-24 09:18:59.799073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.729 [2024-07-24 09:18:59.799324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.799349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.799364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.729 [2024-07-24 09:18:59.802920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.729 [2024-07-24 09:18:59.812379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.729 [2024-07-24 09:18:59.812749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.729 [2024-07-24 09:18:59.812781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.729 [2024-07-24 09:18:59.812799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.729 [2024-07-24 09:18:59.813036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.729 [2024-07-24 09:18:59.813289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.813314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.813329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.729 [2024-07-24 09:18:59.816886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.729 [2024-07-24 09:18:59.826345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.729 [2024-07-24 09:18:59.826754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.729 [2024-07-24 09:18:59.826785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.729 [2024-07-24 09:18:59.826803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.729 [2024-07-24 09:18:59.827040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.729 [2024-07-24 09:18:59.827295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.827319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.827334] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.729 [2024-07-24 09:18:59.830891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.729 [2024-07-24 09:18:59.840349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.729 [2024-07-24 09:18:59.840765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.729 [2024-07-24 09:18:59.840796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.729 [2024-07-24 09:18:59.840814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.729 [2024-07-24 09:18:59.841052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.729 [2024-07-24 09:18:59.841304] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.729 [2024-07-24 09:18:59.841334] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.729 [2024-07-24 09:18:59.841350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.844907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.988 [2024-07-24 09:18:59.854364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.854780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.854811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.854829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.855067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.855319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.855343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.855358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.858913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.988 [2024-07-24 09:18:59.868378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.868818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.868849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.868867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.869114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.869357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.869381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.869396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.872951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.988 [2024-07-24 09:18:59.882404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.882815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.882846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.882863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.883111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.883354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.883378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.883393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.886952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.988 [2024-07-24 09:18:59.896231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.896639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.896669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.896687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.896925] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.897178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.897202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.897218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.900774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.988 [2024-07-24 09:18:59.910235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.910653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.910683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.910701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.910939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.911192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.911216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.911231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.914786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.988 [2024-07-24 09:18:59.924244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.924675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.924706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.924724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.924962] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.925214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.925238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.925253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.928807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.988 [2024-07-24 09:18:59.938061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.938464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.938496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.938514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.938758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.938999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.939023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.939038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.942606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.988 [2024-07-24 09:18:59.952062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.952500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.952531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.952548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.952786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.953028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.953050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.953066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.956634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.988 [2024-07-24 09:18:59.965886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.988 [2024-07-24 09:18:59.966288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.988 [2024-07-24 09:18:59.966319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.988 [2024-07-24 09:18:59.966337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.988 [2024-07-24 09:18:59.966575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.988 [2024-07-24 09:18:59.966817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.988 [2024-07-24 09:18:59.966840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.988 [2024-07-24 09:18:59.966855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.988 [2024-07-24 09:18:59.970423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:21.988 [2024-07-24 09:18:59.979874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:21.989 [2024-07-24 09:18:59.980278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.989 [2024-07-24 09:18:59.980309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:21.989 [2024-07-24 09:18:59.980326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:21.989 [2024-07-24 09:18:59.980564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:21.989 [2024-07-24 09:18:59.980805] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:21.989 [2024-07-24 09:18:59.980828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:21.989 [2024-07-24 09:18:59.980850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:21.989 [2024-07-24 09:18:59.984420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:21.989 [2024-07-24 09:18:59.993879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:18:59.994280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:18:59.994311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:18:59.994329] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:18:59.994566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:18:59.994808] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:18:59.994831] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:18:59.994846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:18:59.998414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.007874] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.008252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.008284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.008302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.008540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.008783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.008806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.008822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.012388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.021490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.021934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.021977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.021994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.022225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.022446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.022466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.022480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.025530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.034866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.035295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.035327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.035344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.035585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.035791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.035812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.035825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.038889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.048131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.048533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.048577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.048592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.048827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.049025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.049045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.049057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.052237] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.061571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.062015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.062043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.062059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.062305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.062519] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.062555] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.062569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.065656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.074906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.075289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.075318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.075334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.075584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.075783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.075802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.075815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.078791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.088198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.088628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.088655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.088687] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.088926] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.089186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.089209] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.089222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:21.989 [2024-07-24 09:19:00.092291] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:21.989 [2024-07-24 09:19:00.101789] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:21.989 [2024-07-24 09:19:00.102239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:21.989 [2024-07-24 09:19:00.102268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:21.989 [2024-07-24 09:19:00.102284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:21.989 [2024-07-24 09:19:00.102525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:21.989 [2024-07-24 09:19:00.102773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:21.989 [2024-07-24 09:19:00.102794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:21.989 [2024-07-24 09:19:00.102807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.248 [2024-07-24 09:19:00.105902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.248 [2024-07-24 09:19:00.114971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.248 [2024-07-24 09:19:00.115398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.248 [2024-07-24 09:19:00.115427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.115443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.115696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.115895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.115915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.115933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.118965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.128190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.128636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.128677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.128693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.128946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.129171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.129192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.129205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.132170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.141485] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.141947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.141989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.142006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.142259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.142479] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.142498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.142510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.145560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.154800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.155220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.155249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.155265] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.155508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.155706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.155725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.155737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.158724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.168215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.168670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.168702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.168718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.168974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.169219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.169240] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.169253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.172226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.181589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.181967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.182009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.182025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.182289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.182507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.182527] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.182539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.185514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.194835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.195225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.195253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.195269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.195512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.195710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.195730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.195742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.198887] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.208510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.208879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.208907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.208924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.209162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.209378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.209412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.209425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.212521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.221797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.222200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.222228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.222244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.222472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.222687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.222706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.222719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.225810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.234985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.235392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.235435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.249 [2024-07-24 09:19:00.235451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.249 [2024-07-24 09:19:00.235719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.249 [2024-07-24 09:19:00.235918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.249 [2024-07-24 09:19:00.235938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.249 [2024-07-24 09:19:00.235950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.249 [2024-07-24 09:19:00.238966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.249 [2024-07-24 09:19:00.248354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.249 [2024-07-24 09:19:00.248765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.249 [2024-07-24 09:19:00.248794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.248811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.249067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.249295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.249316] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.249329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.252303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.261595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.262009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.262035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.262066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.262323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.262540] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.262559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.262571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.265543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.274770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.275171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.275200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.275216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.275445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.275658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.275678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.275690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.278671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.287939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.288371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.288400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.288416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.288657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.288871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.288891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.288904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.291930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.301303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.301675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.301702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.301725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.301934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.302177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.302198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.302211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.305260] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.314555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.314972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.315000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.315016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.315255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.315488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.315507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.315520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.318544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.327889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.328294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.328322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.328338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.328582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.328797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.328817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.328829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.331805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.341127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.341644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.341685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.341702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.341951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.342179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.342205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.342218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.345228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.250 [2024-07-24 09:19:00.354387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.250 [2024-07-24 09:19:00.354750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.250 [2024-07-24 09:19:00.354777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.250 [2024-07-24 09:19:00.354793] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.250 [2024-07-24 09:19:00.355016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.250 [2024-07-24 09:19:00.355261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.250 [2024-07-24 09:19:00.355282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.250 [2024-07-24 09:19:00.355295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.250 [2024-07-24 09:19:00.358267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.367769] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.368140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.368169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.368185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.368399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.368657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.368694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.368708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.371923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.381189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.381599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.381626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.381642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.381863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.382120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.382142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.382170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.385309] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.394654] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.395052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.395080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.395097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.395334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.395551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.395571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.395583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.398590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.407926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.408381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.408409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.408425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.408659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.408859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.408878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.408891] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.411863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.421232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.421667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.421694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.421725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.421981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.422208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.422228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.422242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.425252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.434571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.435038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.435065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.435082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.435346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.435563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.435582] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.435594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.438748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.447930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.448334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.448363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.448379] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.448608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.448822] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.448841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.448853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.452133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.461489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.461969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.461998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.462014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.462258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.462497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.462517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.462529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.465810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.474910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.475297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.475326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.510 [2024-07-24 09:19:00.475342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.510 [2024-07-24 09:19:00.475585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.510 [2024-07-24 09:19:00.475783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.510 [2024-07-24 09:19:00.475803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.510 [2024-07-24 09:19:00.475820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.510 [2024-07-24 09:19:00.478851] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.510 [2024-07-24 09:19:00.488095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.510 [2024-07-24 09:19:00.488494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.510 [2024-07-24 09:19:00.488521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.488536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.488771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.488970] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.488989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.489002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.492023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.501420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.501901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.501928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.501959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.502208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.502413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.502447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.502460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.505516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.514697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.515058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.515086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.515130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.515373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.515588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.515607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.515620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.518631] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.527994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.528491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.528518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.528550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.528804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.529003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.529022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.529034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.532013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.541363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.541746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.541774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.541790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.542031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.542272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.542293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.542306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.545326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.554687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.555134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.555165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.555181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.555415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.555631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.555650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.555663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.558638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.567913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.568298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.568340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.568356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.568611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.568814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.568834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.568846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.571906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.581156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.581563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.581606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.581622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.581877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.582076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.582095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.582132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.585084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.511 [2024-07-24 09:19:00.594476] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.511 [2024-07-24 09:19:00.594891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.511 [2024-07-24 09:19:00.594918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.511 [2024-07-24 09:19:00.594949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.511 [2024-07-24 09:19:00.595199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.511 [2024-07-24 09:19:00.595419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.511 [2024-07-24 09:19:00.595438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.511 [2024-07-24 09:19:00.595451] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.511 [2024-07-24 09:19:00.598421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.512 [2024-07-24 09:19:00.607732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.512 [2024-07-24 09:19:00.608133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.512 [2024-07-24 09:19:00.608161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.512 [2024-07-24 09:19:00.608178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.512 [2024-07-24 09:19:00.608410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.512 [2024-07-24 09:19:00.608625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.512 [2024-07-24 09:19:00.608644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.512 [2024-07-24 09:19:00.608657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.512 [2024-07-24 09:19:00.611633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.512 [2024-07-24 09:19:00.621181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.512 [2024-07-24 09:19:00.621531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.512 [2024-07-24 09:19:00.621560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.512 [2024-07-24 09:19:00.621577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.512 [2024-07-24 09:19:00.621804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.512 [2024-07-24 09:19:00.622021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.512 [2024-07-24 09:19:00.622041] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.512 [2024-07-24 09:19:00.622053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.771 [2024-07-24 09:19:00.625379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.771 [2024-07-24 09:19:00.634513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.771 [2024-07-24 09:19:00.634885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.771 [2024-07-24 09:19:00.634928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.771 [2024-07-24 09:19:00.634943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.771 [2024-07-24 09:19:00.635215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.771 [2024-07-24 09:19:00.635460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.771 [2024-07-24 09:19:00.635481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.771 [2024-07-24 09:19:00.635494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.771 [2024-07-24 09:19:00.638476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.771 [2024-07-24 09:19:00.647722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.771 [2024-07-24 09:19:00.648140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.771 [2024-07-24 09:19:00.648168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.771 [2024-07-24 09:19:00.648200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.771 [2024-07-24 09:19:00.648451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.771 [2024-07-24 09:19:00.648649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.771 [2024-07-24 09:19:00.648668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.771 [2024-07-24 09:19:00.648681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.771 [2024-07-24 09:19:00.651760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.771 [2024-07-24 09:19:00.661028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.771 [2024-07-24 09:19:00.661430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.771 [2024-07-24 09:19:00.661463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.771 [2024-07-24 09:19:00.661480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.771 [2024-07-24 09:19:00.661716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.771 [2024-07-24 09:19:00.661914] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.771 [2024-07-24 09:19:00.661933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.771 [2024-07-24 09:19:00.661946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.771 [2024-07-24 09:19:00.664926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.771 [2024-07-24 09:19:00.674275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:22.771 [2024-07-24 09:19:00.674759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:22.771 [2024-07-24 09:19:00.674787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:22.771 [2024-07-24 09:19:00.674803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:22.771 [2024-07-24 09:19:00.675044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:22.771 [2024-07-24 09:19:00.675291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:22.771 [2024-07-24 09:19:00.675312] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:22.771 [2024-07-24 09:19:00.675326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:22.771 [2024-07-24 09:19:00.678303] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:22.771 [2024-07-24 09:19:00.687564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.771 [2024-07-24 09:19:00.687962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.771 [2024-07-24 09:19:00.687990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.771 [2024-07-24 09:19:00.688006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.771 [2024-07-24 09:19:00.688234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.771 [2024-07-24 09:19:00.688453] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.771 [2024-07-24 09:19:00.688474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.771 [2024-07-24 09:19:00.688502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.771 [2024-07-24 09:19:00.691573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.771 [2024-07-24 09:19:00.700790] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.771 [2024-07-24 09:19:00.701211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.771 [2024-07-24 09:19:00.701253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.701269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.701520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.701745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.701765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.701778] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.705244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.772 [2024-07-24 09:19:00.714073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.714486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.714514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.714531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.714772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.714987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.715007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.715019] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.718037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.772 [2024-07-24 09:19:00.727329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.727759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.727786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.727816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.728070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.728298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.728319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.728332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.731332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.772 [2024-07-24 09:19:00.740650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.741018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.741046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.741063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.741299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.741520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.741539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.741551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.744523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.772 [2024-07-24 09:19:00.754024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.754450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.754478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.754495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.754735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.754949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.754968] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.754981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.757951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.772 [2024-07-24 09:19:00.767967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.768415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.768457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.768474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.768729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.768972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.768995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.769010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.772582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.772 [2024-07-24 09:19:00.781833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.782263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.782293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.782311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.782549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.782790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.782814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.782829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.786396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.772 [2024-07-24 09:19:00.795661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.796049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.796080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.796110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.796352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.796594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.796617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.796633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.800207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.772 [2024-07-24 09:19:00.809690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.810111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.810143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.810161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.810398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.810640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.810663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.772 [2024-07-24 09:19:00.810678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.772 [2024-07-24 09:19:00.814249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.772 [2024-07-24 09:19:00.823520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.772 [2024-07-24 09:19:00.823915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.772 [2024-07-24 09:19:00.823947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.772 [2024-07-24 09:19:00.823965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.772 [2024-07-24 09:19:00.824213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.772 [2024-07-24 09:19:00.824456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.772 [2024-07-24 09:19:00.824479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.773 [2024-07-24 09:19:00.824494] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.773 [2024-07-24 09:19:00.828058] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.773 [2024-07-24 09:19:00.837531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.773 [2024-07-24 09:19:00.838014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.773 [2024-07-24 09:19:00.838045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.773 [2024-07-24 09:19:00.838063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.773 [2024-07-24 09:19:00.838313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.773 [2024-07-24 09:19:00.838556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.773 [2024-07-24 09:19:00.838585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.773 [2024-07-24 09:19:00.838601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.773 [2024-07-24 09:19:00.842173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.773 [2024-07-24 09:19:00.851424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.773 [2024-07-24 09:19:00.851919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.773 [2024-07-24 09:19:00.851974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.773 [2024-07-24 09:19:00.851992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.773 [2024-07-24 09:19:00.852241] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.773 [2024-07-24 09:19:00.852483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.773 [2024-07-24 09:19:00.852507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.773 [2024-07-24 09:19:00.852522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.773 [2024-07-24 09:19:00.856087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:22.773 [2024-07-24 09:19:00.865356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.773 [2024-07-24 09:19:00.865773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.773 [2024-07-24 09:19:00.865804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.773 [2024-07-24 09:19:00.865822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.773 [2024-07-24 09:19:00.866061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.773 [2024-07-24 09:19:00.866314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.773 [2024-07-24 09:19:00.866338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.773 [2024-07-24 09:19:00.866353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.773 [2024-07-24 09:19:00.869916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:22.773 [2024-07-24 09:19:00.879384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:22.773 [2024-07-24 09:19:00.879775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:22.773 [2024-07-24 09:19:00.879807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:22.773 [2024-07-24 09:19:00.879824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:22.773 [2024-07-24 09:19:00.880062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:22.773 [2024-07-24 09:19:00.880316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:22.773 [2024-07-24 09:19:00.880340] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:22.773 [2024-07-24 09:19:00.880355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:22.773 [2024-07-24 09:19:00.883918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.032 [2024-07-24 09:19:00.893397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.893826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.893857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.893875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.894124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.894366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.894390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.894405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.897966] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.032 [2024-07-24 09:19:00.907437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.907836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.907867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.907885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.908134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.908376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.908399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.908414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.911977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.032 [2024-07-24 09:19:00.921446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.921876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.921908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.921926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.922176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.922419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.922442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.922457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.926022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.032 [2024-07-24 09:19:00.935284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.935675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.935706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.935724] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.935968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.936222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.936246] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.936261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.939822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.032 [2024-07-24 09:19:00.949290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.949716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.949746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.949763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.950001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.950255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.950279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.950294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.953858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.032 [2024-07-24 09:19:00.963138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.963552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.963582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.963600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.963837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.964079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.964111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.964129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.967694] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.032 [2024-07-24 09:19:00.977166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.977604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.977635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.977654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.977892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.978146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.978170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.978194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.981758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.032 [2024-07-24 09:19:00.991017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:00.991439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:00.991470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:00.991488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:00.991727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:00.991968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:00.991991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:00.992006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:00.995588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.032 [2024-07-24 09:19:01.004846] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:01.005271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:01.005302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:01.005320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:01.005559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:01.005801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.032 [2024-07-24 09:19:01.005825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.032 [2024-07-24 09:19:01.005840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.032 [2024-07-24 09:19:01.009416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.032 [2024-07-24 09:19:01.018674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.032 [2024-07-24 09:19:01.019090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.032 [2024-07-24 09:19:01.019128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.032 [2024-07-24 09:19:01.019146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.032 [2024-07-24 09:19:01.019384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.032 [2024-07-24 09:19:01.019625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.019647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.019662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.023235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.033 [2024-07-24 09:19:01.032699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.033117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.033153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.033172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.033410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.033652] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.033676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.033691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.037262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.033 [2024-07-24 09:19:01.046727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.047125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.047157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.047175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.047413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.047655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.047678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.047693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.051268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.033 [2024-07-24 09:19:01.060731] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.061126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.061157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.061175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.061413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.061656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.061680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.061695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.065272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.033 [2024-07-24 09:19:01.074727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.075158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.075189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.075207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.075446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.075693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.075717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.075732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.079305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.033 [2024-07-24 09:19:01.088761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.089174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.089205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.089222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.089460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.089702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.089726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.089741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.093328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.033 [2024-07-24 09:19:01.102788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.103201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.103232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.103250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.103488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.103730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.103753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.103769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.107342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.033 [2024-07-24 09:19:01.116808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.117233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.117264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.117282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.117520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.117762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.117786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.117801] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.121380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.033 [2024-07-24 09:19:01.130635] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.131028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.131059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.131077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.131325] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.131568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.131592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.131607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.033 [2024-07-24 09:19:01.135177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.033 [2024-07-24 09:19:01.144642] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.033 [2024-07-24 09:19:01.145039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.033 [2024-07-24 09:19:01.145071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.033 [2024-07-24 09:19:01.145089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.033 [2024-07-24 09:19:01.145338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.033 [2024-07-24 09:19:01.145580] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.033 [2024-07-24 09:19:01.145603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.033 [2024-07-24 09:19:01.145619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.292 [2024-07-24 09:19:01.149191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.292 [2024-07-24 09:19:01.158650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.292 [2024-07-24 09:19:01.159043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.292 [2024-07-24 09:19:01.159074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.292 [2024-07-24 09:19:01.159092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.292 [2024-07-24 09:19:01.159340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.159582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.159605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.159620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.163194] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.293 [2024-07-24 09:19:01.172651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.173044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.173074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.173098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.173350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.173591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.173615] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.173630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.177200] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.293 [2024-07-24 09:19:01.186673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.187073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.187112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.187132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.187371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.187613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.187636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.187651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.191221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.293 [2024-07-24 09:19:01.200693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.201093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.201132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.201150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.201388] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.201630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.201654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.201668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.205238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.293 [2024-07-24 09:19:01.214701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.215185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.215216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.215233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.215471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.215713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.215742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.215757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.219327] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.293 [2024-07-24 09:19:01.228583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.229002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.229033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.229051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.229301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.229543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.229567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.229582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.233152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.293 [2024-07-24 09:19:01.242613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.243005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.243036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.243054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.243302] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.243544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.243568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.243583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.247153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.293 [2024-07-24 09:19:01.256615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.257026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.257057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.257075] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.257324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.257567] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.257591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.257606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.261173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.293 [2024-07-24 09:19:01.270639] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.271069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.271100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.271128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.271367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.271608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.271632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.271647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.275216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.293 [2024-07-24 09:19:01.284474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.284869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.284900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.284918] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.285168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.293 [2024-07-24 09:19:01.285410] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.293 [2024-07-24 09:19:01.285433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.293 [2024-07-24 09:19:01.285448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.293 [2024-07-24 09:19:01.289011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.293 [2024-07-24 09:19:01.298491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.293 [2024-07-24 09:19:01.298892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.293 [2024-07-24 09:19:01.298923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.293 [2024-07-24 09:19:01.298941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.293 [2024-07-24 09:19:01.299191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.299434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.299457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.299472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.303032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.294 [2024-07-24 09:19:01.312504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.312892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.312923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.312946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.313197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.313439] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.313463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.313478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.317039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.294 [2024-07-24 09:19:01.326509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.326905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.326936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.326953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.327202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.327445] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.327468] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.327484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.331045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.294 [2024-07-24 09:19:01.340509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.340921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.340952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.340970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.341221] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.341464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.341487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.341502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.345064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.294 [2024-07-24 09:19:01.354530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.354944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.354975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.354993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.355242] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.355484] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.355512] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.355528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.359090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.294 [2024-07-24 09:19:01.368561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.368974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.369005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.369023] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.369272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.369514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.369538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.369553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.373120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.294 [2024-07-24 09:19:01.382583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.382972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.383003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.383021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.383271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.383513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.383537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.383552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.387126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.294 [2024-07-24 09:19:01.396615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.294 [2024-07-24 09:19:01.396984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.294 [2024-07-24 09:19:01.397015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.294 [2024-07-24 09:19:01.397033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.294 [2024-07-24 09:19:01.397280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.294 [2024-07-24 09:19:01.397522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.294 [2024-07-24 09:19:01.397545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.294 [2024-07-24 09:19:01.397561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.294 [2024-07-24 09:19:01.401131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
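Every cycle above is the same failure: connect() to 10.0.0.2:4420 returns errno 111 (ECONNREFUSED) because nothing is listening while the target is down, so each bdev_nvme controller reset fails and another one is scheduled roughly every 14 ms. A minimal bash sketch of the same probe-and-retry pattern (illustrative only, not part of bdevperf.sh; address, port, and cadence are taken from the log):

```bash
#!/usr/bin/env bash
# Hypothetical probe reproducing the reconnect pattern seen above: while no
# NVMe/TCP listener exists, every TCP connect() fails with ECONNREFUSED (111).
ADDR=10.0.0.2   # target address from the log
PORT=4420       # NVMe/TCP port from the log

for attempt in 1 2 3 4 5; do
    # /dev/tcp is a bash pseudo-path that performs a plain TCP connect()
    if timeout 1 bash -c "</dev/tcp/${ADDR}/${PORT}" 2>/dev/null; then
        echo "attempt ${attempt}: connected"
        break
    fi
    echo "attempt ${attempt}: connection refused (errno 111), retrying"
    sleep 0.014   # ~14 ms between cycles, as in the timestamps above
done
```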
00:33:23.554 [2024-07-24 09:19:01.410595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.411021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.411052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.411069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.411315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.411557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.411581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.411596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.415166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.554 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3920922 Killed "${NVMF_APP[@]}" "$@" 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3921871 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3921871 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3921871 ']' 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
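The `Killed` line shows the previous nvmf_tgt instance being torn down before tgt_init relaunches it; the nvmfappstart trace records the exact relaunch command and then waits for the RPC socket. A condensed sketch of that start-and-wait step, assuming the paths and netns name shown in the log (the polling loop below stands in for the harness's waitforlisten helper, which additionally verifies the app answers RPCs):

```bash
# Sketch of tgt_init/nvmfappstart as traced above; binary path, flags, and
# netns name are copied from the log, the wait loop is illustrative.
SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xE: core mask
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# Poll until the target's RPC UNIX socket appears
until [ -S "$RPC_SOCK" ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.2
done
echo "nvmf_tgt pid $nvmfpid up, RPC socket at $RPC_SOCK"
```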
00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:23.554 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.554 [2024-07-24 09:19:01.424633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.425029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.425060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.425078] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.425327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.425570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.425593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.425608] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.429179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.554 [2024-07-24 09:19:01.438660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.439054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.439085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.439113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.439354] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.439596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.439620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.439635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.443208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.554 [2024-07-24 09:19:01.452285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.452691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.452719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.452736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.452965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.453208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.453231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.453245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.456411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.554 [2024-07-24 09:19:01.465608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.465962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.465988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.466003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.466249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.466469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.466488] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.466501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.468758] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:23.554 [2024-07-24 09:19:01.468829] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.554 [2024-07-24 09:19:01.469536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.554 [2024-07-24 09:19:01.478880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.479355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.479383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.479399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.479641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.554 [2024-07-24 09:19:01.479840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.554 [2024-07-24 09:19:01.479859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.554 [2024-07-24 09:19:01.479871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.554 [2024-07-24 09:19:01.483110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.554 [2024-07-24 09:19:01.492251] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.554 [2024-07-24 09:19:01.492730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.554 [2024-07-24 09:19:01.492758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.554 [2024-07-24 09:19:01.492774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.554 [2024-07-24 09:19:01.493026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.493255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.493276] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.493289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.496284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.555 EAL: No free 2048 kB hugepages reported on node 1 00:33:23.555 [2024-07-24 09:19:01.505540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.505957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.505984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.506001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.506269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.506486] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.506506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.506518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.508941] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:23.555 [2024-07-24 09:19:01.509864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.555 [2024-07-24 09:19:01.519557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.519941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.519968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.519988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.520243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.520470] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.520490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.520503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.524072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
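The EAL notice means NUMA node 1 had no free 2048 kB hugepages when DPDK initialized; the run proceeds on pages provided elsewhere. The per-node counters behind that message are standard kernel sysfs files, e.g.:

```bash
# Print per-NUMA-node 2048 kB hugepage totals (paths are standard sysfs ABI;
# the values will of course vary by machine).
for node in /sys/devices/system/node/node*; do
    hp="$node/hugepages/hugepages-2048kB"
    [ -d "$hp" ] || continue
    printf '%s: total=%s free=%s\n' "$(basename "$node")" \
        "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done
```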
00:33:23.555 [2024-07-24 09:19:01.533503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.533919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.533950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.533968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.534239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.534459] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.534497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.534512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.538017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.555 [2024-07-24 09:19:01.539188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:23.555 [2024-07-24 09:19:01.547447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.547986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.548024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.548046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.548337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.548594] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.548619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.548637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.552145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
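`Total cores available: 3` follows directly from the `-m 0xE` core mask: 0xE is binary 1110, so bits 1-3 select cores 1, 2, and 3 while core 0 is excluded. A one-liner to decode any such mask (the reactor start-up notices a few lines below confirm the same three cores):

```bash
# Decode a hex core mask the way SPDK interprets -m: each set bit is one core.
mask=0xE
printf 'mask %s -> cores:' "$mask"
for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && printf ' %d' "$i"
done
echo   # for 0xE this prints: mask 0xE -> cores: 1 2 3
```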
00:33:23.555 [2024-07-24 09:19:01.561291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.561813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.561858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.561877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.562173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.562397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.562437] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.562454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.565976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.555 [2024-07-24 09:19:01.575122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.575573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.575605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.575623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.575862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.576115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.576153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.576168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.579666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.555 [2024-07-24 09:19:01.588978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.589524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.589562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.589583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.589828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.590080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.590115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.590150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.593652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.555 [2024-07-24 09:19:01.602793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.603448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.603497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.603518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.603793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.604040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.604064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.604082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.607633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.555 [2024-07-24 09:19:01.616693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.555 [2024-07-24 09:19:01.617135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.555 [2024-07-24 09:19:01.617166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.555 [2024-07-24 09:19:01.617184] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.555 [2024-07-24 09:19:01.617421] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.555 [2024-07-24 09:19:01.617687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.555 [2024-07-24 09:19:01.617711] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.555 [2024-07-24 09:19:01.617727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.555 [2024-07-24 09:19:01.621225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.555 [2024-07-24 09:19:01.627870] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.555 [2024-07-24 09:19:01.627906] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:23.555 [2024-07-24 09:19:01.627923] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.555 [2024-07-24 09:19:01.627936] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.556 [2024-07-24 09:19:01.627949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.556 [2024-07-24 09:19:01.628036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:23.556 [2024-07-24 09:19:01.628217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:23.556 [2024-07-24 09:19:01.628221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.556 [2024-07-24 09:19:01.630337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.556 [2024-07-24 09:19:01.630751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.556 [2024-07-24 09:19:01.630781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.556 [2024-07-24 09:19:01.630799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.556 [2024-07-24 09:19:01.631030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.556 [2024-07-24 09:19:01.631275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.556 [2024-07-24 09:19:01.631299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.556 [2024-07-24 09:19:01.631314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
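The app_setup_trace notices spell out both capture options: attach spdk_trace to the running app, or copy the trace shared-memory file for offline decoding. As shell commands (the spdk_trace binary location assumes this workspace's build layout):

```bash
# Both capture paths named in the notices above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Live snapshot of the tracepoint groups enabled with -e 0xFFFF
# (-s: app name, -i: shm id, matching nvmf_tgt -i 0)
"$SPDK/build/bin/spdk_trace" -s nvmf -i 0

# ...or keep the raw trace shm file for offline analysis/debug
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
```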
00:33:23.556 [2024-07-24 09:19:01.634550] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.556 [2024-07-24 09:19:01.643799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.556 [2024-07-24 09:19:01.644325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.556 [2024-07-24 09:19:01.644363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.556 [2024-07-24 09:19:01.644394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.556 [2024-07-24 09:19:01.644642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.556 [2024-07-24 09:19:01.644875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.556 [2024-07-24 09:19:01.644898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.556 [2024-07-24 09:19:01.644914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.556 [2024-07-24 09:19:01.648079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.556 [2024-07-24 09:19:01.657362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.556 [2024-07-24 09:19:01.657892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.556 [2024-07-24 09:19:01.657949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.556 [2024-07-24 09:19:01.657969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.556 [2024-07-24 09:19:01.658251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.556 [2024-07-24 09:19:01.658490] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.556 [2024-07-24 09:19:01.658513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.556 [2024-07-24 09:19:01.658530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.556 [2024-07-24 09:19:01.661734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.815 [2024-07-24 09:19:01.671044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.671558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.671596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.671616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.671839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.672077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.672110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.672129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 [2024-07-24 09:19:01.675471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.815 [2024-07-24 09:19:01.684684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.685153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.685190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.685210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.685432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.685666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.685687] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.685703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 [2024-07-24 09:19:01.688877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.815 [2024-07-24 09:19:01.698190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.698707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.698746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.698767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.699009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.699235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.699257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.699274] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 [2024-07-24 09:19:01.702514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.815 [2024-07-24 09:19:01.711710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.712205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.712240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.712259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.712496] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.712710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.712731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.712746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 [2024-07-24 09:19:01.716038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.815 [2024-07-24 09:19:01.725369] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.725721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.725750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.725766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.725982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.726212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.726234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.726249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 [2024-07-24 09:19:01.729500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.815 [2024-07-24 09:19:01.738885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.739272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.739300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.739324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.739539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.739756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.739778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.739791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.815 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:23.815 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:23.815 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:23.815 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:23.815 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.815 [2024-07-24 09:19:01.743018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.815 [2024-07-24 09:19:01.752367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.815 [2024-07-24 09:19:01.752740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.815 [2024-07-24 09:19:01.752768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.815 [2024-07-24 09:19:01.752784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.815 [2024-07-24 09:19:01.753012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.815 [2024-07-24 09:19:01.753255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.815 [2024-07-24 09:19:01.753278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.815 [2024-07-24 09:19:01.753291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 [2024-07-24 09:19:01.756584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.816 [2024-07-24 09:19:01.765891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.816 [2024-07-24 09:19:01.766287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.816 [2024-07-24 09:19:01.766317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.816 [2024-07-24 09:19:01.766334] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.816 [2024-07-24 09:19:01.766549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.816 [2024-07-24 09:19:01.766776] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.816 [2024-07-24 09:19:01.766797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.816 [2024-07-24 09:19:01.766811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.816 [2024-07-24 09:19:01.770034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
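The trap registered above is ordinary bash cleanup plumbing: on SIGINT, SIGTERM, or normal EXIT, dump shared-memory diagnostics best-effort and run the harness teardown. The same pattern in isolation (process_shm and nvmftestfini are harness helpers from the sourced common scripts, shown only to make the trap's shape explicit):

```bash
# '|| :' makes the shm dump best-effort; registering EXIT alongside the
# signals covers normal termination as well as interruption.
cleanup() {
    process_shm --id "$NVMF_APP_SHM_ID" || :   # harness diagnostic helper
    nvmftestfini                               # harness teardown (nvmf/common.sh)
}
trap cleanup SIGINT SIGTERM EXIT
```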
00:33:23.816 [2024-07-24 09:19:01.772269] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.816 [2024-07-24 09:19:01.779357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.816 [2024-07-24 09:19:01.779764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.816 [2024-07-24 09:19:01.779808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.816 [2024-07-24 09:19:01.779824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.816 [2024-07-24 09:19:01.780065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.816 [2024-07-24 09:19:01.780306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.816 [2024-07-24 09:19:01.780329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.816 [2024-07-24 09:19:01.780342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 [2024-07-24 09:19:01.783545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.816 [2024-07-24 09:19:01.792863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.816 [2024-07-24 09:19:01.793283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.816 [2024-07-24 09:19:01.793313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.816 [2024-07-24 09:19:01.793330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.816 [2024-07-24 09:19:01.793559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.816 [2024-07-24 09:19:01.793770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.816 [2024-07-24 09:19:01.793791] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.816 [2024-07-24 09:19:01.793805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.816 [2024-07-24 09:19:01.797073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.816 [2024-07-24 09:19:01.806529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.816 [2024-07-24 09:19:01.807022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.816 [2024-07-24 09:19:01.807060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.816 [2024-07-24 09:19:01.807080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.816 [2024-07-24 09:19:01.807315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.816 [2024-07-24 09:19:01.807549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.816 [2024-07-24 09:19:01.807572] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.816 [2024-07-24 09:19:01.807588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 [2024-07-24 09:19:01.810854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:23.816 Malloc0 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:23.816 [2024-07-24 09:19:01.820110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:23.816 [2024-07-24 09:19:01.820556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:23.816 [2024-07-24 09:19:01.820615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420 00:33:23.816 [2024-07-24 09:19:01.820634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set 00:33:23.816 [2024-07-24 09:19:01.820866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor 00:33:23.816 [2024-07-24 09:19:01.821081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:23.816 [2024-07-24 09:19:01.821128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:23.816 [2024-07-24 09:19:01.821144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:23.816 [2024-07-24 09:19:01.824422] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:23.816 [2024-07-24 09:19:01.833803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:33:23.816 [2024-07-24 09:19:01.834175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:23.816 [2024-07-24 09:19:01.834204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20f8b50 with addr=10.0.0.2, port=4420
00:33:23.816 [2024-07-24 09:19:01.834220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20f8b50 is same with the state(5) to be set
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:23.816 [2024-07-24 09:19:01.834439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20f8b50 (9): Bad file descriptor
00:33:23.816 [2024-07-24 09:19:01.834662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:23.816 [2024-07-24 09:19:01.834683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:23.816 [2024-07-24 09:19:01.834697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:23.816 [2024-07-24 09:19:01.837838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:23.816 [2024-07-24 09:19:01.837967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:23.816 09:19:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3921208
00:33:23.816 [2024-07-24 09:19:01.847371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:23.816 [2024-07-24 09:19:01.926414] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
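Taken together, the rpc_cmd calls traced above amount to the standard SPDK NVMe-oF/TCP target bring-up. A condensed sketch of the same five steps, assuming a running nvmf_tgt and scripts/rpc.py talking to the default /var/tmp/spdk.sock (rpc_cmd in these scripts is a thin wrapper around exactly these invocations):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # flags verbatim from the trace (-u: io-unit-size)
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB RAM-backed bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001    # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                     # expose Malloc0 as a namespace
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The last step is what turns the tide: as soon as "NVMe/TCP Target Listening on 10.0.0.2 port 4420" is logged, the next reset attempt ends with "Resetting controller successful".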
00:33:33.801
00:33:33.801                                                                            Latency(us)
00:33:33.801 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:33:33.801 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:33.801 	 Verification LBA range: start 0x0 length 0x4000
00:33:33.801 	 Nvme1n1             :      15.02    6650.02      25.98    8992.01       0.00    8158.92     776.72   18447.17
00:33:33.801 ===================================================================================================================
00:33:33.801 	 Total               :            6650.02      25.98    8992.01       0.00    8158.92     776.72   18447.17
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:33.801 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:33.802 rmmod nvme_tcp
00:33:33.802 rmmod nvme_fabrics
00:33:33.802 rmmod nvme_keyring
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3921871 ']'
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3921871
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3921871 ']'
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3921871
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3921871
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3921871'
00:33:33.802 killing process with pid 3921871
00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3921871 00:33:33.802
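The summary table above is internally consistent: at a 4096-byte I/O size, 6650.02 IOPS works out to the reported bandwidth, which can be checked with any shell that has bc:

    echo 'scale=2; 6650.02 * 4096 / 1048576' | bc    # 25.97, i.e. the 25.98 MiB/s column up to rounding

The high Fail/s value is expected for this particular run: the controller is deliberately reset and reconnected throughout the roughly 15-second verify workload, so a large share of I/Os completing in error is part of the test design.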
09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3921871 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.802 09:19:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:35.711 00:33:35.711 real 0m22.165s 00:33:35.711 user 0m58.196s 00:33:35.711 sys 0m4.772s 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:35.711 ************************************ 00:33:35.711 END TEST nvmf_bdevperf 00:33:35.711 ************************************ 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:35.711 ************************************ 00:33:35.711 START TEST nvmf_target_disconnect 00:33:35.711 ************************************ 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:35.711 * Looking for test storage... 
00:33:35.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:35.711 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.711 
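The NVME_HOSTNQN / NVME_HOSTID pair set a few lines above comes from nvme-cli, with the host ID being the UUID tail of the generated NQN. A minimal sketch of that derivation (the exact parameter expansion is an assumption; only the resulting relationship between the two values is visible in this trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip through the last ':' to keep just <uuid>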
09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:35.712 09:19:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:33:37.631 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:37.632 
09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:37.632 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:37.632 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:37.632 Found net devices under 0000:09:00.0: cvl_0_0 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:37.632 Found net devices under 0000:09:00.1: cvl_0_1 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:37.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:37.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:33:37.632 00:33:37.632 --- 10.0.0.2 ping statistics --- 00:33:37.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.632 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:33:37.632 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:37.892 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:37.892 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:33:37.892 00:33:37.892 --- 10.0.0.1 ping statistics --- 00:33:37.892 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:37.892 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.892 ************************************ 00:33:37.892 START TEST nvmf_target_disconnect_tc1 00:33:37.892 ************************************ 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.892 09:19:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:37.892 EAL: No free 2048 kB hugepages reported on node 1 00:33:37.892 [2024-07-24 09:19:15.883957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:37.892 [2024-07-24 09:19:15.884033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14283e0 with addr=10.0.0.2, port=4420 00:33:37.892 [2024-07-24 09:19:15.884070] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:37.892 [2024-07-24 09:19:15.884099] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:37.892 [2024-07-24 09:19:15.884129] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:33:37.892 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:33:37.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:33:37.892 Initializing NVMe Controllers 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:37.892 00:33:37.892 real 0m0.095s 00:33:37.892 user 0m0.037s 00:33:37.892 sys 0m0.058s 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:33:37.892 ************************************ 00:33:37.892 END TEST nvmf_target_disconnect_tc1 00:33:37.892 ************************************ 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:37.892 09:19:15 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:37.892 ************************************ 00:33:37.892 START TEST nvmf_target_disconnect_tc2 00:33:37.892 ************************************ 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3925018 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3925018 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3925018 ']' 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:37.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:37.892 09:19:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:37.892 [2024-07-24 09:19:15.999518] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:37.892 [2024-07-24 09:19:15.999604] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.150 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.151 [2024-07-24 09:19:16.044904] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
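Since this is a phy run (NET_TYPE=phy), the two ports of the NIC were split during nvmftestinit above: cvl_0_0 was moved into the cvl_0_0_ns_spdk network namespace for the target side while the initiator keeps cvl_0_1 in the root namespace, so a single machine can exercise real NVMe/TCP traffic over hardware. The nvmf_tgt being waited on here is therefore launched inside that namespace; condensed from the trace (SPDK tree path shortened):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
    # -m 0xF0 pins the reactors to cores 4-7, matching the four
    # "Reactor started on core ..." notices that follow

waitforlisten then polls until the app answers on /var/tmp/spdk.sock before the first rpc_cmd is issued.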
00:33:38.151 [2024-07-24 09:19:16.072541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:38.151 [2024-07-24 09:19:16.159746] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:38.151 [2024-07-24 09:19:16.159800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.151 [2024-07-24 09:19:16.159828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.151 [2024-07-24 09:19:16.159840] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.151 [2024-07-24 09:19:16.159849] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.151 [2024-07-24 09:19:16.160204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:33:38.151 [2024-07-24 09:19:16.160260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:33:38.151 [2024-07-24 09:19:16.160328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:33:38.151 [2024-07-24 09:19:16.160330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.408 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.408 Malloc0 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.409 [2024-07-24 09:19:16.339939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.409 [2024-07-24 09:19:16.368202] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3925040 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:33:38.409 09:19:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:33:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:33:40.313 09:19:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3925018 00:33:40.313 09:19:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed 
with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Write completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 [2024-07-24 09:19:18.394054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.313 starting I/O failed 00:33:40.313 Read completed with error (sct=0, sc=8) 00:33:40.314 starting I/O failed 00:33:40.314 Read completed with error (sct=0, sc=8) 00:33:40.314 starting I/O failed 00:33:40.314 Read completed with error (sct=0, sc=8) 00:33:40.314 starting I/O failed 00:33:40.314 Write completed with error (sct=0, sc=8) 00:33:40.314 starting I/O failed 00:33:40.314 Write completed with error (sct=0, 
sc=8)
00:33:40.314 starting I/O failed
00:33:40.314 Read completed with error (sct=0, sc=8)
00:33:40.314 starting I/O failed
00:33:40.314 Write completed with error (sct=0, sc=8)
00:33:40.314 starting I/O failed
00:33:40.314 [... the "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for each remaining outstanding I/O ...]
00:33:40.314 [2024-07-24 09:19:18.394389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:33:40.314 [... more aborted Read/Write completions, as above ...]
00:33:40.314 [2024-07-24 09:19:18.394705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:40.314 [... more aborted Read/Write completions, as above ...]
00:33:40.314 [2024-07-24 09:19:18.395009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:40.314 [2024-07-24 09:19:18.395180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.314 [2024-07-24 09:19:18.395220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.314 qpair failed and we were unable to recover it.
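For readers decoding the failures above: (sct=0, sc=8) is the NVMe completion status the test prints, i.e. status code type 0 (generic command status) with status code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion" — consistent with the queue pairs being torn down here — and the CQ transport error -6 is -ENXIO ("No such device or address"). Below is a minimal sketch of unpacking those two fields from the 16-bit completion status word; it assumes the standard NVMe CQE layout and is not SPDK's own decoder:

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe CQE status word (DW3[31:16]): bit 0 = phase tag, bits 8:1 = status
     * code (SC), bits 11:9 = status code type (SCT), bit 15 = do-not-retry. */
    static void decode_status(uint16_t status)
    {
        uint8_t sc  = (status >> 1) & 0xff;   /* status code */
        uint8_t sct = (status >> 9) & 0x07;   /* status code type */
        printf("sct=%u, sc=%u%s\n", sct, sc,
               (sct == 0 && sc == 0x08) ?
               " (generic status: Command Aborted due to SQ Deletion)" : "");
    }

    int main(void)
    {
        decode_status(0x08 << 1);   /* the (sct=0, sc=8) case from the log */
        return 0;
    }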
00:33:40.315 [... the same "posix_sock_create: *ERROR*: connect() failed, errno = 111" / "nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." triplet repeats continuously from 09:19:18.395 through 09:19:18.425, cycling over tqpair values 0x12774b0, 0x7f7418000b90, 0x7f7428000b90 and 0x7f7420000b90 ...]
00:33:40.320 [2024-07-24 09:19:18.425658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.320 [2024-07-24 09:19:18.425682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.320 qpair failed and we were unable to recover it. 00:33:40.320 [2024-07-24 09:19:18.425823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.320 [2024-07-24 09:19:18.425848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.320 qpair failed and we were unable to recover it. 00:33:40.320 [2024-07-24 09:19:18.425989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.320 [2024-07-24 09:19:18.426014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.320 qpair failed and we were unable to recover it. 00:33:40.320 [2024-07-24 09:19:18.426196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.320 [2024-07-24 09:19:18.426222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.320 qpair failed and we were unable to recover it. 00:33:40.320 [2024-07-24 09:19:18.426356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.320 [2024-07-24 09:19:18.426381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.320 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.426520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.426545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.426676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.426701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.426818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.426843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.426982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.427115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.427307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.427459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.427648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.427813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.427839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.428922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.428949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.429069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.429094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.429259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.429284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.429446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.429471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.429625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.429650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.429814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.429841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.429998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.430026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.430185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.430211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.430375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.430415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.430609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.430634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.430787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.430830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.430996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.431187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.431378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.431542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.431682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.431848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.431873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.432054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.432223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.432374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.432508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.432707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.432871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.432902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.433892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.433918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.434315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.434922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.434948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.435095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.435131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.435284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.435323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.435496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.435523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.435706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.435735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.435910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.435936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.436079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.436253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.436412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.436615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.436770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.436951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.436979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.437144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.437181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.437320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.437347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.437523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.437549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.437668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.437694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 
00:33:40.599 [2024-07-24 09:19:18.437835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.437860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.437986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.438014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.438161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.438194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.438336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.438361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.438504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.438539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.438689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.599 [2024-07-24 09:19:18.438715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.599 qpair failed and we were unable to recover it. 00:33:40.599 [2024-07-24 09:19:18.438837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.438862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.439005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.439198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.439387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.439593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.439751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.439922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.439948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.440059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.440084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.440288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.440331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.440493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.440536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.440683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.440753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.440919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.440944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.441062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.441205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.441368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.441530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.441696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.441865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.441891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.442857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.442883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.443023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.443212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.443346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.443510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.443677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.443811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.443836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.444002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.444167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.444369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.444538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.444684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.444884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.444909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.445945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.445969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.446113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.446139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.446276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.446301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.446443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.446468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.446596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.446638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.446830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.446857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.447013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.447038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.447177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.447204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.447365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.447389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.447688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.447741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.447888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.447912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.448114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.448154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 00:33:40.600 [2024-07-24 09:19:18.448279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.600 [2024-07-24 09:19:18.448307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.600 qpair failed and we were unable to recover it. 
00:33:40.600 [2024-07-24 09:19:18.448448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.600 [2024-07-24 09:19:18.448475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.600 qpair failed and we were unable to recover it.
[00:33:40.600-00:33:40.603] The connect()-failed / sock-connection-error / qpair-failed triplet above repeats continuously from 2024-07-24 09:19:18.448448 through 09:19:18.485679, cycling over tqpair values 0x7f7418000b90, 0x7f7420000b90, and 0x12774b0. Every attempt targets addr=10.0.0.2, port=4420, fails with errno = 111, and no qpair recovers.
00:33:40.603 [2024-07-24 09:19:18.485829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.485857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.486971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.486999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.487162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.487189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.487328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.487353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.487499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.487529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 
00:33:40.603 [2024-07-24 09:19:18.487662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.487688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.487798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.487824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.488909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.488934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.489100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.489132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.489250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.489276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 
00:33:40.603 [2024-07-24 09:19:18.489414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.489439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.489599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.489628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.489805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.489844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.490888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.490917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.491092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.491138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 
00:33:40.603 [2024-07-24 09:19:18.491285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.491312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.491443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.491469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.491658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.491701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.491840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.491882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 00:33:40.603 [2024-07-24 09:19:18.492963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.603 [2024-07-24 09:19:18.492989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.603 qpair failed and we were unable to recover it. 
00:33:40.603 [2024-07-24 09:19:18.493177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.493361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.493527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.493663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.493798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.493927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.493952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.494063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.494222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.494424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.494578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.494758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.494937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.494964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.495118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.495146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.495311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.495354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.495477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.495520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.495672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.495715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.495854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.495879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.496020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.496179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.496344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.496537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.496716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.496957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.496985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.497124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.497166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.497315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.497342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.497465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.497492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.497668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.497696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.497822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.497849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.498011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.498055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.498199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.498227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.498382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.498412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.498662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.498715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.498865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.498893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.499051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.499077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.499197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.499224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.499369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.499394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.499582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.499614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.499826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.499878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.500031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.500220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.500457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.500630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.500791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.500958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.500986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.501117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.501159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.501295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.501320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.501459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.501485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.501651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.501679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.501858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.501886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.502038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.502261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.502411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.502573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.502733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.502912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.502940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.503092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.503126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.503260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.503284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.503419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.503444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.503627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.503655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.503801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.503828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.504002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.504041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.504191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.504219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.504390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.504435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.504641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.504692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.504907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.504956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.505118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.505162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.505349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.505392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.505605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.505655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.505783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.505826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.505935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.505961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.506159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.506185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.506339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.506367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.506587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.506646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.506809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.506835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.506945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.506970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.507081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.507112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.507255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.507281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.507449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.507474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.507708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.507760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.507884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.507923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 
00:33:40.604 [2024-07-24 09:19:18.508072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.604 [2024-07-24 09:19:18.508099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.604 qpair failed and we were unable to recover it. 00:33:40.604 [2024-07-24 09:19:18.508242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.508284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.508445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.508474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.508600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.508629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.508828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.508872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.509036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.509220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.509383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.509601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.509769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-24 09:19:18.509960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.509990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.510873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.510898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-24 09:19:18.511465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.511899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.511926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.512093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.512125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.512308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.512337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.512504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.512530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.512668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.512694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.512837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.512862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 00:33:40.605 [2024-07-24 09:19:18.513022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.605 [2024-07-24 09:19:18.513060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.605 qpair failed and we were unable to recover it. 
00:33:40.605 [2024-07-24 09:19:18.513213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.513240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.513429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.513457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.513705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.513756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.513912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.513940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.514074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.514099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.514247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.514271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.514409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.514451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.514654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.514712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.514855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.514899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.515037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.515235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.515464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.515664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.515831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.515993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.516019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.516178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.516224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.516417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.516445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.516665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.516720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.516825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.516851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.516988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.517912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.517939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.518934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.518959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.519968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.519994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.520108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.520133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.520288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.520333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.520502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.520548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.520720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.520745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.520867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.520892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.521004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.521029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.521155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.521184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.521326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.605 [2024-07-24 09:19:18.521370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.605 qpair failed and we were unable to recover it.
00:33:40.605 [2024-07-24 09:19:18.521536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.521564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.521723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.521748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.521914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.521939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.522904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.522929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.523879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.523995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.524970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.524995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.525130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.525189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.525320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.525351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.525509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.525539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.525694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.525720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.525882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.525907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.526076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.526120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.526281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.526313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.526441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.526471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.526633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.526661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.526843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.526894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.527058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.527083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.527234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.527259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.527461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.527513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.527670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.527699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.527866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.527925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.528085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.528117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.528261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.528288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.528461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.528490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.528665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.528693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.528843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.528876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.529946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.529974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.530969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.530997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.531183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.531209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.531352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.531377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.531498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.531540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.531668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.531697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.531914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.531942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.532069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.532096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.532273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.532298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.532455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.532483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.532661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.532689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.532823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.532851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.533926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.533954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.534116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.534142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.534298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.534326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.534474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.534503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.534658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.534683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.534859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.534902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.535861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.535977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.606 [2024-07-24 09:19:18.536002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.606 qpair failed and we were unable to recover it.
00:33:40.606 [2024-07-24 09:19:18.536146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.536172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.536308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.536333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.536498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.536526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.536686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.536711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.536877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.536919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.537890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.537914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.538889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.538917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.539089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.539288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.539445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.539649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.539828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.539967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.540855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.540989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.541207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.541410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.541590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.541730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.541875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.541899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.542964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.542989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.543107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.543134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.543298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.607 [2024-07-24 09:19:18.543323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.607 qpair failed and we were unable to recover it.
00:33:40.607 [2024-07-24 09:19:18.543459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.543484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.543596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.543636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.543804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.543829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.543972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.543997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.544158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.544198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.544356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.544384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.544551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.544576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.544688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.544713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.544846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.544873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.545034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-24 09:19:18.545215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.545424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.545586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.545723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.545899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.545927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.546087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.546270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.546455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.546610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.546765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-24 09:19:18.546933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.546961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.547955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.547982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.548133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.548178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.548330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.548355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.548490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.548514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 
00:33:40.607 [2024-07-24 09:19:18.548686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.548711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.548863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.548891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.549017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.549045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.549176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.549202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.549312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.549337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.549468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.607 [2024-07-24 09:19:18.549492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.607 qpair failed and we were unable to recover it. 00:33:40.607 [2024-07-24 09:19:18.549623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.549648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.549791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.549816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.549966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.549991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.550153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.550183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.550322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.550347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.550498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.550522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.550673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.550698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.550828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.550870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.551965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.551999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.552149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.552291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.552451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.552622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.552801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.552945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.552971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.553121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.553278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.553435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.553611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.553815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.553974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.553999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.554161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.554186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.554315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.554343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.554469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.554497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.554640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.554668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.554795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.554846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.555012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.555155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.555341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.555500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.555701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.555904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.555932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.556116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.556143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.556287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.556313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.556510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.556564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.556737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.556765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.556918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.556946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.557113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.557138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.557274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.557299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.557449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.557478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.557654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.557832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.557859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.557978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.558169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.558334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.558530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.558707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.558865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.558893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.559068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.559097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.559255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.559282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.559421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.559464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.559616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.559644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.559895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.559948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.560146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.560317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.560485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.560672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.560818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.560995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.561192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.561338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.561489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.561647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.561819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.561847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.561978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.562170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.562349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.562535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.562714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 
00:33:40.608 [2024-07-24 09:19:18.562863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.562891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.563062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.563107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.563226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.563253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.563422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.563467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.563695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.563742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.563877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.563919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.564059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.564085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.564231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.608 [2024-07-24 09:19:18.564260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.608 qpair failed and we were unable to recover it. 00:33:40.608 [2024-07-24 09:19:18.564407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.564435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.564564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.564591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 
00:33:40.609 [2024-07-24 09:19:18.564740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.564768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.564949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.564974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.565136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.565163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.565314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.565339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.565531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.565559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.565689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.565716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.565932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.565959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.566113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.566159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.566325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.566351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.566636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.566687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 
00:33:40.609 [2024-07-24 09:19:18.566844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.566871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.567032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.567060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.567263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.567291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.567458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.567486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.567656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.567696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.567880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.567907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.568054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.568082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.568250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.568275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.568423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.568450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.568602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.568630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 
00:33:40.609 [2024-07-24 09:19:18.568757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.568785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.569928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.569955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.570085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.570116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.570232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.570256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 00:33:40.609 [2024-07-24 09:19:18.570416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.609 [2024-07-24 09:19:18.570441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.609 qpair failed and we were unable to recover it. 
00:33:40.609 [2024-07-24 09:19:18.570575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-24 09:19:18.570602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
00:33:40.609 [2024-07-24 09:19:18.574960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.609 [2024-07-24 09:19:18.575016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.609 qpair failed and we were unable to recover it.
[... the same three-line failure repeats continuously from 09:19:18.570 through 09:19:18.608: every connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and the qpair (tqpair=0x12774b0, briefly tqpair=0x7f7420000b90) is declared unrecoverable each time ...]
00:33:40.612 [2024-07-24 09:19:18.608035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.612 [2024-07-24 09:19:18.608063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.612 qpair failed and we were unable to recover it.
00:33:40.612 [2024-07-24 09:19:18.608219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.608244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.608363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.608389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.608549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.608574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.608712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.608738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.608892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.608921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.609111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.609300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.609461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.609624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.609752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.609911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.609936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.610929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.610970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.611130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.611159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.611324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.611349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.611488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.611514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.611671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.611696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.611837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.611862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.611980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.612161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.612302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.612461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.612672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.612834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.612862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.613039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.613200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.613343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.613533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.613725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.613940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.613965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.614928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.614955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.615077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.615146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.615283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.615307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.615421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.615446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.615609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.615635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.615832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.615857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.616014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.616216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.616378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.616540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.616755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.616913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.616941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.617093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.617128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.617289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.617313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.617451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.617476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.617647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.617675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.617824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.617852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.618027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.618055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.618223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.618248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.618365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.618390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.618559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.618584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.618807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.618851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.619953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.619978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.620148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.620174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.620349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.620374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 
00:33:40.612 [2024-07-24 09:19:18.620503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.612 [2024-07-24 09:19:18.620530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.612 qpair failed and we were unable to recover it. 00:33:40.612 [2024-07-24 09:19:18.620678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.620706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.620861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.620893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.621050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.621075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.621228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.621254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.621394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.621433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.621636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.621682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.621845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.621870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.622362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.622849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.622986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.623193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.623338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.623502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.623656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.623789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.623955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.623980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.624928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.624953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.625099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.625138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.625306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.625331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.625497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.625528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.625671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.625696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.625869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.625894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.626109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.626285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.626465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.626672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.626852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.626997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.627166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.627315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.627453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.627671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.627803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.627830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.627994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.628169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.628376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.628536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.628751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.628916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.628942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.629060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.629085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.629256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.629281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.629448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.629473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.629625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.629653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.629825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.629853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 00:33:40.613 [2024-07-24 09:19:18.630962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.613 [2024-07-24 09:19:18.630987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.613 qpair failed and we were unable to recover it. 
00:33:40.613 [2024-07-24 09:19:18.631178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.613 [2024-07-24 09:19:18.631207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.613 qpair failed and we were unable to recover it.
00:33:40.613 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats back-to-back from 09:19:18.631327 through 09:19:18.667915 ...]
00:33:40.616 [2024-07-24 09:19:18.668074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.616 [2024-07-24 09:19:18.668099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.616 qpair failed and we were unable to recover it.
00:33:40.616 [2024-07-24 09:19:18.668245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.668269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.668408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.668433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.668582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.668610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.668761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.668789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.668944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.668972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.669160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.669186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.669381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.669408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.669536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.669563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.669716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.669743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.669874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.669899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 
00:33:40.616 [2024-07-24 09:19:18.670043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.670251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.670383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.670543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.670684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.670841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.670866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.671006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.671196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.671367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.671569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 
00:33:40.616 [2024-07-24 09:19:18.671713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.671874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.671899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.672898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.672927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.673112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.673141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.673269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.673295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 
00:33:40.616 [2024-07-24 09:19:18.673459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.673502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.673654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.673683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.673810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.673839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.674884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.674909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.675113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.675138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 
00:33:40.616 [2024-07-24 09:19:18.675277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.616 [2024-07-24 09:19:18.675303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.616 qpair failed and we were unable to recover it. 00:33:40.616 [2024-07-24 09:19:18.675487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.675512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.675698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.675726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.675869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.675896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.676970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.676995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.677160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.677185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.677325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.677353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.677507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.677535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.677688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.677713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.677834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.677874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.677995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.678167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.678356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.678539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.678704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.678868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.678894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.679955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.679984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.680153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.680179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.680342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.680367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.680548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.680573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.680724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.680751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.680879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.680911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.681970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.681995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.682109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.682134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.682266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.682291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.682441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.682468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.682606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.682631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.682772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.682800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.682978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.683172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.683335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.683478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.683689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.683851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.683876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.683997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.684168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.684330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.684490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.684654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.684849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.684877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.685049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.685237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.685380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.685522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.685716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.685874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.685899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.686847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.686872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.687053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.687256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.687417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.687586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.687805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.687964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.687989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.688955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.688983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 
00:33:40.617 [2024-07-24 09:19:18.689114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.689142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.617 [2024-07-24 09:19:18.689271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.617 [2024-07-24 09:19:18.689297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.617 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.689456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.689498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.689629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.689657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.689808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.689836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.689973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.689998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.690125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.690151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.690264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.690289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.690403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.690428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 00:33:40.618 [2024-07-24 09:19:18.690603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.618 [2024-07-24 09:19:18.690628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.618 qpair failed and we were unable to recover it. 
00:33:40.618 [2024-07-24 09:19:18.690771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.618 [2024-07-24 09:19:18.690795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.618 qpair failed and we were unable to recover it.
00:33:40.618 [... the same three-line failure repeats for every reconnect attempt from 09:19:18.690771 through 09:19:18.726144 (elapsed 00:33:40.618-00:33:40.904), always against tqpair=0x12774b0, addr=10.0.0.2, port=4420, errno = 111; duplicate repetitions elided ...]
00:33:40.904 [2024-07-24 09:19:18.726277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.726304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.726489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.726514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.726650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.726675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.726810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.726837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.727879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.727904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 
00:33:40.904 [2024-07-24 09:19:18.728017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.728909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.728941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.729076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.729250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.729389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 
00:33:40.904 [2024-07-24 09:19:18.729576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.729756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.729918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.904 [2024-07-24 09:19:18.729943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.904 qpair failed and we were unable to recover it. 00:33:40.904 [2024-07-24 09:19:18.730124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.730285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.730426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.730559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.730698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.730853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.730878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.731036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-24 09:19:18.731231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.731390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.731578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.731765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.731909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.731935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-24 09:19:18.732813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.732842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.732976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.733143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.733302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.733479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.733653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.733820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.733862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.734020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.734168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.734360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-24 09:19:18.734521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.734717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.734850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.734875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.735830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.735855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.736016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.736043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 
00:33:40.905 [2024-07-24 09:19:18.736188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.905 [2024-07-24 09:19:18.736217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.905 qpair failed and we were unable to recover it. 00:33:40.905 [2024-07-24 09:19:18.736385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.736410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.736523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.736548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.736661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.736686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.736790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.736815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.736946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.736973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.737141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.737166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.737306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.737332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.737479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.737504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.737642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.737666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-24 09:19:18.737801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.737844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.737973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.738844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.738987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.739166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.739344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-24 09:19:18.739562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.739698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.739864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.739906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.740089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.740121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.740306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.740334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.740495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.740520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.740656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.740681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.740889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.740914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.741049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.741243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 
00:33:40.906 [2024-07-24 09:19:18.741435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.741602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.741769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.741952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.741977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.742089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.742127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.742262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.742287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.742451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.742493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.742611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.742636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.906 [2024-07-24 09:19:18.742754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.906 [2024-07-24 09:19:18.742779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.906 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.742947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.742987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-24 09:19:18.743174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.743200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.743363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.743388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.743552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.743580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.743708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.743735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.743920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.743946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.744062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.744088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.744236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.744278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.744432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.744457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.744620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.744661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.744826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.744851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-24 09:19:18.744987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.745861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.745976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.746161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.746316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.746504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-24 09:19:18.746683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.746850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.746875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.747829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.747855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.748059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.748084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 00:33:40.907 [2024-07-24 09:19:18.748233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.748258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it. 
00:33:40.907 [2024-07-24 09:19:18.748392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.907 [2024-07-24 09:19:18.748417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.907 qpair failed and we were unable to recover it.
00:33:40.913 [the same three-part failure record repeats verbatim for every reconnect attempt from 2024-07-24 09:19:18.748552 through 09:19:18.785366: each connect() to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x12774b0, and the qpair cannot be recovered]
00:33:40.913 [2024-07-24 09:19:18.785541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.913 [2024-07-24 09:19:18.785589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.913 qpair failed and we were unable to recover it. 00:33:40.913 [2024-07-24 09:19:18.785745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.785771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.785895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.785920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.786060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.786257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.786422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.786630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.786805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.786982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.787139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 
00:33:40.914 [2024-07-24 09:19:18.787301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.787490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.787641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.787827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.787966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.787992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.788126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.788156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.788271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.788296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.788450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.788474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.788617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.788641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 00:33:40.914 [2024-07-24 09:19:18.788785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.914 [2024-07-24 09:19:18.788810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.914 qpair failed and we were unable to recover it. 
00:33:40.914 [2024-07-24 09:19:18.789154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.914 [2024-07-24 09:19:18.789195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.914 qpair failed and we were unable to recover it.
00:33:40.914 [2024-07-24 09:19:18.790667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.914 [2024-07-24 09:19:18.790716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.914 qpair failed and we were unable to recover it.
00:33:40.914 [2024-07-24 09:19:18.790881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.914 [2024-07-24 09:19:18.790906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.914 qpair failed and we were unable to recover it.
00:33:40.918 [2024-07-24 09:19:18.815272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.918 [2024-07-24 09:19:18.815297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.918 qpair failed and we were unable to recover it.
00:33:40.918 [2024-07-24 09:19:18.815487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.815515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.815664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.815690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.815835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.815861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.816916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.918 [2024-07-24 09:19:18.816941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.918 qpair failed and we were unable to recover it. 00:33:40.918 [2024-07-24 09:19:18.817116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 
00:33:40.919 [2024-07-24 09:19:18.817291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.817501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.817665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.817803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.817962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.817990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.818152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.818179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.818317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.818358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.818586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.818614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.818756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.818784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.818929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.818954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 
00:33:40.919 [2024-07-24 09:19:18.819178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.819206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.819381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.819406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.819550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.819576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.819707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.819732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.819874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.819918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.820062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.820089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.820282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.820307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.820448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.820473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.820624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.820652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.820880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.820908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 
00:33:40.919 [2024-07-24 09:19:18.821095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.821126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.821264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.821289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.821443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.821471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.821655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.821682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.821845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.821873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.822057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.822082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.822210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.822236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.919 [2024-07-24 09:19:18.822376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.919 [2024-07-24 09:19:18.822401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.919 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.822539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.822564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.822768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.822793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-24 09:19:18.822930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.822955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.823128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.823320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.823531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.823676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.823838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.823973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.824013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.824152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.824178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.824399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.824427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.824579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.824605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-24 09:19:18.824767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.824792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.824992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.825171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.825328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.825528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.825735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.825913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.825940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.826100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.826280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.826436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 
00:33:40.920 [2024-07-24 09:19:18.826601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.826765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.826908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.826937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.827138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.827187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.827352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.827377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.827520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.827548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.920 [2024-07-24 09:19:18.827709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.920 [2024-07-24 09:19:18.827733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.920 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.827863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.827888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.827995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.828187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-24 09:19:18.828374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.828527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.828684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.828869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.828894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.829096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.829136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.829277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.829303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.829472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.829497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.829691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.829719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.829882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.829907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.830069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.830094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-24 09:19:18.830258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.830286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.830426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.830451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.830611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.830636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.830815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.830841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.830992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.831195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.831382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.831569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.831750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.831932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.831964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 
00:33:40.921 [2024-07-24 09:19:18.832118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.832146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.832328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.832353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.832542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.832569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.832758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.832783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.832920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.832945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.833108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.833150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.921 qpair failed and we were unable to recover it. 00:33:40.921 [2024-07-24 09:19:18.833311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.921 [2024-07-24 09:19:18.833336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.833490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.833530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.833679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.833708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.833870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.833895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-24 09:19:18.834030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.834073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.834243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.834269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.834392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.834417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.834584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.834610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.834791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.834818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.834981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.835163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.835355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.835536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.835702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-24 09:19:18.835863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.835891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.836086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.836253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.836442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.836648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.836833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.836996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.837246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.837414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.837554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-24 09:19:18.837703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.837889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.837917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.838968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.838993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.839110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.839136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.922 [2024-07-24 09:19:18.839290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.839316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 
00:33:40.922 [2024-07-24 09:19:18.839462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.922 [2024-07-24 09:19:18.839490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.922 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.839673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.839698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.839889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.839917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.840056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.840082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.840307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.840332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.840469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.840494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.840648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.840675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.840826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.840855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.841076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.841109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.841239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.841264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 
00:33:40.923 [2024-07-24 09:19:18.841408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.841433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.841603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.841630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.841870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.841895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.842063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.842213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.842471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.842691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.842874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.842991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.843017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.843157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.843183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 
00:33:40.923 [2024-07-24 09:19:18.843416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.843441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.843616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.843641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.843823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.843851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.923 qpair failed and we were unable to recover it. 00:33:40.923 [2024-07-24 09:19:18.844835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.923 [2024-07-24 09:19:18.844863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.845036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 
00:33:40.924 [2024-07-24 09:19:18.845208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.845372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.845559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.845779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.845946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.845972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.846198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.846226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.846387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.846425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.846603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.846628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.846770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.846795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.846933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.846967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 
00:33:40.924 [2024-07-24 09:19:18.847141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.847167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.847316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.847341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.847489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.847532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.847739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.847789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.847905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.847933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.848092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.848261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.848450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.848599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.848792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 
00:33:40.924 [2024-07-24 09:19:18.848970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.848995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.849885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.849911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.850051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.850077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.850193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.924 [2024-07-24 09:19:18.850220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.924 qpair failed and we were unable to recover it. 00:33:40.924 [2024-07-24 09:19:18.850352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.850394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 
00:33:40.925 [2024-07-24 09:19:18.850514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.850542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.850681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.850706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.850841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.850866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.851876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.851905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.852065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 
00:33:40.925 [2024-07-24 09:19:18.852251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.852392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.852555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.852694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.852855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.852880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.853077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.853239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.853427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.853618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.853793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 
00:33:40.925 [2024-07-24 09:19:18.853955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.853981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.854135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.854301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.854485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.854664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.854842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.854981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.855006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.855163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.855189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.855319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.855347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.925 [2024-07-24 09:19:18.855508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.855533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 
00:33:40.925 [2024-07-24 09:19:18.855663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.925 [2024-07-24 09:19:18.855688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.925 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.855832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.855859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.856884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.856999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.857155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 
00:33:40.926 [2024-07-24 09:19:18.857329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.857504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.857662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.857869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.857897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.858970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.858996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 
00:33:40.926 [2024-07-24 09:19:18.859148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.859294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.859425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.859608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.859787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.859974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.859999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.860137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.860278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.860417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.860613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 
00:33:40.926 [2024-07-24 09:19:18.860759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.860948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.860973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.861112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.861138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.861303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.861329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.861445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.861470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.861602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.861627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.861780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.861820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.862002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.862179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.862372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 
00:33:40.926 [2024-07-24 09:19:18.862563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.862706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.862884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.862912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.926 [2024-07-24 09:19:18.863045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.926 [2024-07-24 09:19:18.863069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.926 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.863204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.863230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.863338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.863363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.863532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.863558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.863697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.863722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.863888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.863913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.864073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 
00:33:40.927 [2024-07-24 09:19:18.864221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.864386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.864547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.864712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.864876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.864901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.865061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.865231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.865375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.865589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.865747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 
00:33:40.927 [2024-07-24 09:19:18.865914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.865958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.866160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.866307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.866452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.866615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.866782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.866974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.867167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.867339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.867569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 
00:33:40.927 [2024-07-24 09:19:18.867782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.867946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.867973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.868872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.868898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.869038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.869082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.869230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.869255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 
00:33:40.927 [2024-07-24 09:19:18.869395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.869421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.869618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.869645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.869805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.869846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.869999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.870024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.870167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.870193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.870357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.870393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.870569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.870597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.927 qpair failed and we were unable to recover it. 00:33:40.927 [2024-07-24 09:19:18.870776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.927 [2024-07-24 09:19:18.870801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.870954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.870982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.871122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-24 09:19:18.871282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.871414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.871554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.871746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.871939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.871967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.872117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.872142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.872277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.872302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.872439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.872465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.872603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.872628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 00:33:40.928 [2024-07-24 09:19:18.872794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.928 [2024-07-24 09:19:18.872824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:40.928 qpair failed and we were unable to recover it. 
00:33:40.928 [2024-07-24 09:19:18.872979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.873146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.873336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.873539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.873701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.873847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.873872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.874899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.874924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.875965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.875998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.876930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.928 [2024-07-24 09:19:18.876954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.928 qpair failed and we were unable to recover it.
00:33:40.928 [2024-07-24 09:19:18.877109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.877250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.877413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.877556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.877698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.877835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.877861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.878858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.878986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.879011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.879152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.879178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.879317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.879357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.879554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.879589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.879794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.879823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.879984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.880202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.880375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.880544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.880698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.880890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.880915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.881075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.881110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.881263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.881288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.881438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.881463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.881625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.881651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.881897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.881928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.882110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.882154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.882292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.882317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.882469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.882511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.882699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.882752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.882905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.882944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.883109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.883134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.883269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.883295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.883449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.883478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.883696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.883749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.883875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.883901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.929 [2024-07-24 09:19:18.884789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.929 [2024-07-24 09:19:18.884817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.929 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.884986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.885150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.885285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.885450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.885669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.885837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.885861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.886870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.886895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.887939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.887965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.888072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.888112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.888256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.888280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.888419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.888445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.889324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.889353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.889526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.889563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.889720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.889750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.889886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.889913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.890895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.890920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.891876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.891902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.892038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.892063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.892204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.892230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.892368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.892393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.892594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.892620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.892785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.892813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.893638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.893671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.930 [2024-07-24 09:19:18.893863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.930 [2024-07-24 09:19:18.893892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.930 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.894913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.894942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.895884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.895910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.896874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.896905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.897044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.897073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.897200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1285470 is same with the state(5) to be set
00:33:40.931 [2024-07-24 09:19:18.897360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.897391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.897536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.897563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.897728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.897772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.897938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.897964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.898937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.898965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.899940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.899965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.900959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.900984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.901187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.901328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.901469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.901665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.931 [2024-07-24 09:19:18.901843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.931 qpair failed and we were unable to recover it.
00:33:40.931 [2024-07-24 09:19:18.901978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.902955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.902981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.903115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.932 [2024-07-24 09:19:18.903141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.932 qpair failed and we were unable to recover it.
00:33:40.932 [2024-07-24 09:19:18.903248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.903273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.903404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.903456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.903579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.903605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.903741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.903767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.903915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.903943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.904073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.904239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.904378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.904553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.904702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 
00:33:40.932 [2024-07-24 09:19:18.904864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.904890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.905001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.905026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.905826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.905859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.906840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.906865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 
00:33:40.932 [2024-07-24 09:19:18.907169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.907967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.907992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.908109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.908259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.908406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.908554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 
00:33:40.932 [2024-07-24 09:19:18.908704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.908861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.908888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.909007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.909032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.909159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.932 [2024-07-24 09:19:18.909185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.932 qpair failed and we were unable to recover it. 00:33:40.932 [2024-07-24 09:19:18.909310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.909335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.909476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.909504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.909642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.909667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.909823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.909848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.909989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.910141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 
00:33:40.933 [2024-07-24 09:19:18.910277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.910416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.910585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.910824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.910852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.911883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.911909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 
00:33:40.933 [2024-07-24 09:19:18.912032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.912207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.912369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.912558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.912734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.912933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.912973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.913118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.913269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.913435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.913604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 
00:33:40.933 [2024-07-24 09:19:18.913785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.913954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.913982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.914882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.914907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.915019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.915044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.915164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.915189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 
00:33:40.933 [2024-07-24 09:19:18.915333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.915382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.915526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.915570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.916353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.916384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.916587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.916617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.916800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.916844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.916962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.916988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.917137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.917162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.917277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.917302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.917419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.917446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.933 [2024-07-24 09:19:18.917591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.917616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 
00:33:40.933 [2024-07-24 09:19:18.917735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.933 [2024-07-24 09:19:18.917761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.933 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.917878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.917904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.918899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.918924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 
00:33:40.934 [2024-07-24 09:19:18.919219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.919864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.919975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.920135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.920276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.920434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.920601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 
00:33:40.934 [2024-07-24 09:19:18.920742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.920886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.920912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.921840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.921866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 
00:33:40.934 [2024-07-24 09:19:18.922340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.922956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.922982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.923120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.923256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.923395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.923548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.923692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 
00:33:40.934 [2024-07-24 09:19:18.923838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.923863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.934 [2024-07-24 09:19:18.924880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.934 [2024-07-24 09:19:18.924914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.934 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 
00:33:40.935 [2024-07-24 09:19:18.925317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.925935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.925960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 
00:33:40.935 [2024-07-24 09:19:18.926822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.926967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.926992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.927154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.927195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.927359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.927389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.927548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.927581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.927753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.927779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.927916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.927942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.928079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.928109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.928282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.928307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 00:33:40.935 [2024-07-24 09:19:18.928509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.935 [2024-07-24 09:19:18.928537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.935 qpair failed and we were unable to recover it. 
00:33:40.935 [2024-07-24 09:19:18.928694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.935 [2024-07-24 09:19:18.928724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.935 qpair failed and we were unable to recover it.
00:33:40.935 [... the same three-line failure triple (posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 09:19:18.928 through 09:19:18.963, cycling across tqpair handles 0x7f7428000b90, 0x7f7418000b90, and 0x7f7420000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:33:40.939 [2024-07-24 09:19:18.963066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.939 [2024-07-24 09:19:18.963091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.939 qpair failed and we were unable to recover it.
00:33:40.939 [2024-07-24 09:19:18.963217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.963242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.963379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.963405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.963546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.963571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.963688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.963715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.963852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.963883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.963999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.964184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.964369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.964509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.964674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 
00:33:40.939 [2024-07-24 09:19:18.964836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.939 [2024-07-24 09:19:18.964972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.939 [2024-07-24 09:19:18.964998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.939 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.965972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.965997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.966121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.966283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
00:33:40.940 [2024-07-24 09:19:18.966448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.966661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.966822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.966970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.966995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.967960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.967985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
00:33:40.940 [2024-07-24 09:19:18.968147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.968173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.968313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.968339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.968525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.968553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.968706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.968751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.968937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.968979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.969121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.969258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.969438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.969641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.969800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
00:33:40.940 [2024-07-24 09:19:18.969969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.969994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.970946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.970972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.971116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.971249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.971389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
00:33:40.940 [2024-07-24 09:19:18.971573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.971711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.971904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.971929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.972094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.972266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.972421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.972599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.972801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.972984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.973129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 
00:33:40.940 [2024-07-24 09:19:18.973285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.973419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.973602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.973764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.973954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.940 [2024-07-24 09:19:18.973980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.940 qpair failed and we were unable to recover it. 00:33:40.940 [2024-07-24 09:19:18.974088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.974237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.974376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.974542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.974686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 
00:33:40.941 [2024-07-24 09:19:18.974812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.974974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.974999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.975907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.975932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.976066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.976242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 
00:33:40.941 [2024-07-24 09:19:18.976422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.976567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.976773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.976935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.976963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.977117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.977308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.977498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.977668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.977839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.977980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 
00:33:40.941 [2024-07-24 09:19:18.978163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.978301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.978529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.978704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.978899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.978925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.979069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.979231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.979361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.979577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.979740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 
00:33:40.941 [2024-07-24 09:19:18.979888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.979915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.980085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.980273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.980461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.980652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.980841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.980981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.981179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.981327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.981494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 
00:33:40.941 [2024-07-24 09:19:18.981636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.981804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.981962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.981987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.982874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.941 [2024-07-24 09:19:18.982899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.941 qpair failed and we were unable to recover it. 00:33:40.941 [2024-07-24 09:19:18.983038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 
00:33:40.942 [2024-07-24 09:19:18.983195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.983361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.983496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.983655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.983819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.983966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.983992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.984145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.984171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.984297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.984323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.984465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.984490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 00:33:40.942 [2024-07-24 09:19:18.984598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:40.942 [2024-07-24 09:19:18.984624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:40.942 qpair failed and we were unable to recover it. 
00:33:40.942 [2024-07-24 09:19:18.984790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.942 [2024-07-24 09:19:18.984816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:40.942 qpair failed and we were unable to recover it.
00:33:40.942 [2024-07-24 09:19:18.985114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.942 [2024-07-24 09:19:18.985153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:40.942 qpair failed and we were unable to recover it.
00:33:40.943 [2024-07-24 09:19:18.991427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:40.943 [2024-07-24 09:19:18.991465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:40.943 qpair failed and we were unable to recover it.
[... the same three-line failure block repeats roughly 200 more times between 09:19:18.984 and 09:19:19.019, cycling over tqpair handles 0x7f7420000b90, 0x7f7428000b90, and 0x12774b0, always against addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it." ...]
00:33:41.230 [2024-07-24 09:19:19.019612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.230 [2024-07-24 09:19:19.019658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.230 qpair failed and we were unable to recover it.
00:33:41.230 [2024-07-24 09:19:19.019781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.230 [2024-07-24 09:19:19.019810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.230 qpair failed and we were unable to recover it. 00:33:41.230 [2024-07-24 09:19:19.019936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.230 [2024-07-24 09:19:19.019963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.230 qpair failed and we were unable to recover it. 00:33:41.230 [2024-07-24 09:19:19.020120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.230 [2024-07-24 09:19:19.020147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.230 qpair failed and we were unable to recover it. 00:33:41.230 [2024-07-24 09:19:19.020289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.230 [2024-07-24 09:19:19.020315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.230 qpair failed and we were unable to recover it. 00:33:41.230 [2024-07-24 09:19:19.020434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.230 [2024-07-24 09:19:19.020459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.230 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.020636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.020664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.020813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.020843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.021020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.021185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.021346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 
00:33:41.231 [2024-07-24 09:19:19.021532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.021719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.021872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.021900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.022077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.022111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.022255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.022281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.022480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.022531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.022678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.022706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.022856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.022884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.023049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.023215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 
00:33:41.231 [2024-07-24 09:19:19.023413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.023604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.023756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.023945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.023971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.024112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.024168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.024338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.024365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.024545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.024576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.024718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.024745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.024904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.024960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.025073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 
00:33:41.231 [2024-07-24 09:19:19.025213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.025389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.025628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.025807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.025968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.025993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.026106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.026132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.026274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.026299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.026467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.026494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.026690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.026717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.026944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.026994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 
00:33:41.231 [2024-07-24 09:19:19.027174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.027201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.027344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.027369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.231 qpair failed and we were unable to recover it. 00:33:41.231 [2024-07-24 09:19:19.027510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.231 [2024-07-24 09:19:19.027550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.027708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.027733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.027846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.027871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.028004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.028198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.028361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.028545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.028779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 
00:33:41.232 [2024-07-24 09:19:19.028956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.028981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.029149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.029288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.029473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.029660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.029841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.029979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.030165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.030334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.030497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 
00:33:41.232 [2024-07-24 09:19:19.030639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.030841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.030867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.031068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.031243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.031411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.031600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.031794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.031988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.032151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.032289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 
00:33:41.232 [2024-07-24 09:19:19.032439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.032648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.032829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.032857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.033901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.033943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 00:33:41.232 [2024-07-24 09:19:19.034109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.034140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.232 qpair failed and we were unable to recover it. 
00:33:41.232 [2024-07-24 09:19:19.034284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.232 [2024-07-24 09:19:19.034309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.034475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.034502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.034658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.034686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.034838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.034863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.034997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.035209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.035356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.035494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.035715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.035898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.035923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 
00:33:41.233 [2024-07-24 09:19:19.036035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.036234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.036373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.036506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.036718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.036913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.036938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.037107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.037280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.037439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.037614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 
00:33:41.233 [2024-07-24 09:19:19.037796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.037961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.037986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.038856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.038881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.039036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.039265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 
00:33:41.233 [2024-07-24 09:19:19.039409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.039552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.039715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.039859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.039886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.040028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.040054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.040170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.040195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.040304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.040329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.040438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.040463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.233 [2024-07-24 09:19:19.040599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.233 [2024-07-24 09:19:19.040624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.233 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.040760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.040785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 
00:33:41.234 [2024-07-24 09:19:19.040918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.040946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.041955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.041982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.042128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.042173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.042311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.042336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 00:33:41.234 [2024-07-24 09:19:19.042489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.234 [2024-07-24 09:19:19.042516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.234 qpair failed and we were unable to recover it. 
00:33:41.234 [2024-07-24 09:19:19.042633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.234 [2024-07-24 09:19:19.042660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.234 qpair failed and we were unable to recover it.
00:33:41.234-00:33:41.239 [... the same three-line failure repeats back-to-back, roughly 200 times, from 2024-07-24 09:19:19.042633 through 09:19:19.078821: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error, and the qpair cannot be recovered; every attempt targets addr=10.0.0.2, port=4420, alternating between tqpair=0x12774b0 and tqpair=0x7f7428000b90 ...]
00:33:41.239 [2024-07-24 09:19:19.078988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.079163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.079335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.079504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.079717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.079858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.079901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.080014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.080185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.080351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.080513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 
00:33:41.240 [2024-07-24 09:19:19.080710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.080894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.080923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.081113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.081157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.081321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.081346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.081450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.081493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.081643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.081670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.081835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.081860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.082038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.082228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.082369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 
00:33:41.240 [2024-07-24 09:19:19.082535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.082697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.082833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.082874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.083045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.083088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.083242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.083269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.083435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.083460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.083694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.083720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.083838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.083864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.084047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.084072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.084247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.084275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 
00:33:41.240 [2024-07-24 09:19:19.084434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.084462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.084612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.084637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.240 [2024-07-24 09:19:19.084777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.240 [2024-07-24 09:19:19.084819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.240 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.084970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.084999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.085178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.085203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.085344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.085369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.085504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.085530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.085686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.085725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.085961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.086136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 
00:33:41.241 [2024-07-24 09:19:19.086283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.086483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.086721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.086919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.086962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.087150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.087188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.087358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.087400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.087628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.087674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.087859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.087909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.088053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.088230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 
00:33:41.241 [2024-07-24 09:19:19.088386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.088598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.088809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.088953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.088982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.089134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.089178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.089328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.089367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.089632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.089683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.089846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.089872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.090017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.090217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 
00:33:41.241 [2024-07-24 09:19:19.090383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.090580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.090784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.090939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.090966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.091176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.091203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.091314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.091339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.091450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.091475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.091632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.091658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.091798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.091826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.241 [2024-07-24 09:19:19.092007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.092035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 
00:33:41.241 [2024-07-24 09:19:19.092216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.241 [2024-07-24 09:19:19.092241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.241 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.092381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.092406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.092589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.092615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.092807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.092862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.093062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.093089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.093257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.093282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.093414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.093439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.093674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.093719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.093877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.093905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.094066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.094097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 
00:33:41.242 [2024-07-24 09:19:19.094249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.094287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.094429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.094473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.094631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.094674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.094866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.094894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.095947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.095973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 
00:33:41.242 [2024-07-24 09:19:19.096082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.096117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.096254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.096282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.096471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.096524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.096684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.096726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.096890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.096916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.097053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.097216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.097402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.097566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.097759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 
00:33:41.242 [2024-07-24 09:19:19.097929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.097954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.098084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.098145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.098278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.098308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.098498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.098527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.098711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.098741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.098880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.098905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.099018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.099044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.099191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.099218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.242 qpair failed and we were unable to recover it. 00:33:41.242 [2024-07-24 09:19:19.099358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.242 [2024-07-24 09:19:19.099400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.099548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.099576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 
00:33:41.243 [2024-07-24 09:19:19.099728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.099757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.099906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.099935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.100107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.100133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.100277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.100302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.100430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.100458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.100603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.100630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.100842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.100899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.101046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.101225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.101423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 
00:33:41.243 [2024-07-24 09:19:19.101577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.101762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.101915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.101939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.102920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.102946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.103058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.103083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 
00:33:41.243 [2024-07-24 09:19:19.103244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.103282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.103451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.103480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.103645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.103671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.103813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.103838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.104885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.104910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 
00:33:41.243 [2024-07-24 09:19:19.105071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.105117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.105303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.105330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.105472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.105498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.105676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.105746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.105921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.105981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.106144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.243 [2024-07-24 09:19:19.106171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.243 qpair failed and we were unable to recover it. 00:33:41.243 [2024-07-24 09:19:19.106333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.106360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.106491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.106516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.106682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.106708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.106853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.106907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 
00:33:41.244 [2024-07-24 09:19:19.107057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.107249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.107433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.107571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.107731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.107947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.107975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.108100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.108136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.108318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.108342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.108483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.108514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.108688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.108731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 
00:33:41.244 [2024-07-24 09:19:19.108988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.109230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.109392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.109574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.109747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.109931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.109959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.110084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.110116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.110250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.110275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.110433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.110461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.110604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.110632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 
00:33:41.244 [2024-07-24 09:19:19.110784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.110811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.110988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.111023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.111175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.111201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.111341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.111381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.244 qpair failed and we were unable to recover it. 00:33:41.244 [2024-07-24 09:19:19.111536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.244 [2024-07-24 09:19:19.111563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.111743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.111792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.111921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.111950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.112118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.112161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.112304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.112329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.112467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.112520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 
00:33:41.245 [2024-07-24 09:19:19.112677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.112702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.112894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.112921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.113072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.113099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.113267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.113293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.113451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.113477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.113622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.113648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.113812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.113840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.114000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.114025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.114177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.114216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.114364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.114409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 
00:33:41.245 [2024-07-24 09:19:19.114578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.114607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.114803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.114829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.114975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.115131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.115319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.115505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.115705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.115878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.115933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.116120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.116166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.116284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.116310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 
00:33:41.245 [2024-07-24 09:19:19.116440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.116482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.116661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.116689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.116838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.116867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.117935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.117963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 00:33:41.245 [2024-07-24 09:19:19.118140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.118183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.245 qpair failed and we were unable to recover it. 
00:33:41.245 [2024-07-24 09:19:19.118299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.245 [2024-07-24 09:19:19.118324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.118481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.118508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.118663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.118691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.118807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.118835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.118989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.119017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.119206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.119232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.119349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.119393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.119604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.119631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.119816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.119842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.119990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 
00:33:41.246 [2024-07-24 09:19:19.120171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.120317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.120499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.120664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.120805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.120942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.120966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.121082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.121114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.121260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.121285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.121444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.121472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.121625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.121650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 
00:33:41.246 [2024-07-24 09:19:19.121789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.121814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.121975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.122151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.122325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.122518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.122699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.122854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.122894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.123048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.123220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.123373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 
00:33:41.246 [2024-07-24 09:19:19.123585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.123770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.123927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.123967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.124144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.124173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.124306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.124331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.124488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.124530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.124693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.124718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.124880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.124905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.246 [2024-07-24 09:19:19.125043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.246 [2024-07-24 09:19:19.125068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.246 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.125211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.125237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 
00:33:41.247 [2024-07-24 09:19:19.125348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.125373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.125512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.125554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.125730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.125758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.125929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.125956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.126964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.126991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 
00:33:41.247 [2024-07-24 09:19:19.127154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.127180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.127293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.127318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.127452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.127477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.127631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.127658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.127822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.127847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.127976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.128117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.128290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.128485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.128656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 
00:33:41.247 [2024-07-24 09:19:19.128835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.128863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.128990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.129912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.129937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.130092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.130127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.130298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.130326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 
00:33:41.247 [2024-07-24 09:19:19.130492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.130518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.130699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.130727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.130875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.130902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.131087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.131119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.131229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.131254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.131389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.131413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.247 [2024-07-24 09:19:19.131571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.247 [2024-07-24 09:19:19.131595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.247 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.131700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.131742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.131867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.131895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.132064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.132092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 
00:33:41.248 [2024-07-24 09:19:19.132293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.132319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.132491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.132516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.132653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.132678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.132838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.132870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.133931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.133974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 
00:33:41.248 [2024-07-24 09:19:19.134149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.134317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.134449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.134635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.134814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.134946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.134971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.135143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.135172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.135310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.135335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.135485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.135510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.135696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.135721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 
00:33:41.248 [2024-07-24 09:19:19.135857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.135882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.135992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.136188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.136370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.136540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.136757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.136963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.136988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.137129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.137155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.137263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.137288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.137426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.137451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 
00:33:41.248 [2024-07-24 09:19:19.137635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.137663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.137820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.137849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.138008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.138033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.138166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.138208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.248 [2024-07-24 09:19:19.138362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.248 [2024-07-24 09:19:19.138389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.248 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.138521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.138546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.138666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.138691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.138840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.138866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.139009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.139193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 
00:33:41.249 [2024-07-24 09:19:19.139358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.139503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.139632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.139826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.139851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.140864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.140903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 
00:33:41.249 [2024-07-24 09:19:19.141046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.141208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.141344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.141503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.141666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.141826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.141851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.142013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.142195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.142372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.142557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 
00:33:41.249 [2024-07-24 09:19:19.142716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.142935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.142963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.143186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.143212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.143350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.143391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.143572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.143599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.143782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.143807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.143929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.249 [2024-07-24 09:19:19.143956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.249 qpair failed and we were unable to recover it. 00:33:41.249 [2024-07-24 09:19:19.144114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.144142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.144326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.144351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.144541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.144569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 
00:33:41.250 [2024-07-24 09:19:19.144750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.144777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.144906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.144937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.145076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.145108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.145277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.145305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.145466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.145491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.145665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.145693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.145819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.145848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.146007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.146174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.146340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 
00:33:41.250 [2024-07-24 09:19:19.146539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.146701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.146906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.146931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.147096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.147257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.147450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.147612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.147818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.147978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.148164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 
00:33:41.250 [2024-07-24 09:19:19.148308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.148490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.148678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.148819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.149819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 
00:33:41.250 [2024-07-24 09:19:19.149960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.149985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.150136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.150162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.150287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.150316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.150471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.150496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.150636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.150661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.250 qpair failed and we were unable to recover it. 00:33:41.250 [2024-07-24 09:19:19.150811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.250 [2024-07-24 09:19:19.150838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.150997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.151137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.151305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.151484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 
00:33:41.251 [2024-07-24 09:19:19.151674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.151809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.151834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.151988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.152238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.152415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.152609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.152768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.152897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.152922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.153085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.153283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 
00:33:41.251 [2024-07-24 09:19:19.153421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.153577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.153745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.153877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.153901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.154946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.154971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 
00:33:41.251 [2024-07-24 09:19:19.155114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.155287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.155429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.155589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.155748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.155916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.155941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.156049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.156189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.156347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.156557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 
00:33:41.251 [2024-07-24 09:19:19.156687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.156862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.156903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.157063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.157088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.251 qpair failed and we were unable to recover it. 00:33:41.251 [2024-07-24 09:19:19.157224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.251 [2024-07-24 09:19:19.157249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.157383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.157425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.157573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.157600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.157734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.157759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.157901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.157925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 
00:33:41.252 [2024-07-24 09:19:19.158383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.158961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.158989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.159145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.159311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.159474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.159608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.159755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 
00:33:41.252 [2024-07-24 09:19:19.159942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.159969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.160890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.160922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.161087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.161249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.161426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 
00:33:41.252 [2024-07-24 09:19:19.161636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.161778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.161957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.161984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.162922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.162950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.163083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.163114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 
00:33:41.252 [2024-07-24 09:19:19.163238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.163263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.252 [2024-07-24 09:19:19.163454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.252 [2024-07-24 09:19:19.163479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.252 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.163614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.163639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.163746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.163771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.163876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.163901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.164008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.164033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.164168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.164194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.164354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.164381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.164532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.164557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 00:33:41.253 [2024-07-24 09:19:19.164692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.253 [2024-07-24 09:19:19.164717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.253 qpair failed and we were unable to recover it. 
00:33:41.256 [2024-07-24 09:19:19.182785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.256 [2024-07-24 09:19:19.182823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.256 qpair failed and we were unable to recover it.
[... identical failures continue, alternating between tqpair=0x12774b0 and tqpair=0x7f7428000b90, from 09:19:19.183013 through 09:19:19.198263 ...]
00:33:41.258 [2024-07-24 09:19:19.196895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.196921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.197954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.197980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.198091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.198238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.198377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 
00:33:41.258 [2024-07-24 09:19:19.198513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.198679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.198853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.198878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.199012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.199056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.199192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.199218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.199320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.199350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.199488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.199513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.258 [2024-07-24 09:19:19.199643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.258 [2024-07-24 09:19:19.199671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.258 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.199803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.199828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.199936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.199961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 
00:33:41.259 [2024-07-24 09:19:19.200095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.200266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.200483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.200668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.200831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.200968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.200994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.201100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.201247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.201406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.201572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 
00:33:41.259 [2024-07-24 09:19:19.201758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.201894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.201919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.202920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.202946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.203058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.203254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 
00:33:41.259 [2024-07-24 09:19:19.203437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.203585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.203758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.203927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.203969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.204098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.204140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.204301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.204326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.204468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.204510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.204670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.204695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.204811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.204852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.205032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 
00:33:41.259 [2024-07-24 09:19:19.205203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.205350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.205512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.205719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.205929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.205954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.259 [2024-07-24 09:19:19.206061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.259 [2024-07-24 09:19:19.206114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.259 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.206304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.206329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.206442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.206467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.206578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.206605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.206781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.206806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 
00:33:41.260 [2024-07-24 09:19:19.206938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.206963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.207127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.207305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.207495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.207662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.207838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.207994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.208157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.208300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.208497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 
00:33:41.260 [2024-07-24 09:19:19.208655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.208809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.208837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.208999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.209142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.209305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.209497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.209677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.209858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.209886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 
00:33:41.260 [2024-07-24 09:19:19.210331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.210946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.210988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.211111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.211288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.211431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.211593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.211762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 
00:33:41.260 [2024-07-24 09:19:19.211948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.211973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.212156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.212186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.212346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.260 [2024-07-24 09:19:19.212374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.260 qpair failed and we were unable to recover it. 00:33:41.260 [2024-07-24 09:19:19.212513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.212540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.212730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.212759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.212910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.212943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.213078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.213246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.213416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.213554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 
00:33:41.261 [2024-07-24 09:19:19.213733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.213929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.213957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.214896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.214924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.215063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.215239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 
00:33:41.261 [2024-07-24 09:19:19.215401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.215594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.215764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.215967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.215993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.216934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.216962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 
00:33:41.261 [2024-07-24 09:19:19.217089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.217138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.217266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.217292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.217447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.217503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.217621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.217647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.217795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.217837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.217991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.218137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.218347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.218494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.218666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 
00:33:41.261 [2024-07-24 09:19:19.218806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.261 [2024-07-24 09:19:19.218832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.261 qpair failed and we were unable to recover it. 00:33:41.261 [2024-07-24 09:19:19.219022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.219963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.219987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.220130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.220267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 
00:33:41.262 [2024-07-24 09:19:19.220477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.220661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.220808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.220946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.220972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 00:33:41.262 [2024-07-24 09:19:19.221935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.262 [2024-07-24 09:19:19.221963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.262 qpair failed and we were unable to recover it. 
00:33:41.262 [2024-07-24 09:19:19.223032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.262 [2024-07-24 09:19:19.223060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.262 qpair failed and we were unable to recover it.
00:33:41.266 [2024-07-24 09:19:19.244199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.266 [2024-07-24 09:19:19.244238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.266 qpair failed and we were unable to recover it.
00:33:41.268 [2024-07-24 09:19:19.256346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.268 [2024-07-24 09:19:19.256380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.268 qpair failed and we were unable to recover it.
00:33:41.268 [2024-07-24 09:19:19.256507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.256544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.256712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.256761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.256945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.256971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.257131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.257161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.257296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.257321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.257467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.257492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.257623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.257658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.257862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.257910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.258062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.258239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 
00:33:41.268 [2024-07-24 09:19:19.258385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.258522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.258670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.258866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.258895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.259844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.259883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 
00:33:41.268 [2024-07-24 09:19:19.259997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.260025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.260166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.260193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.260308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.260334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.260442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.268 [2024-07-24 09:19:19.260467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.268 qpair failed and we were unable to recover it. 00:33:41.268 [2024-07-24 09:19:19.260604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.260629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.260738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.260764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.260873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.260902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 
00:33:41.269 [2024-07-24 09:19:19.261502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.261908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.261932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.262915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.262940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 
00:33:41.269 [2024-07-24 09:19:19.263072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.263242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.263402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.263590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.263760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.263920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.263947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.264062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.264216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.264351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.264523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 
00:33:41.269 [2024-07-24 09:19:19.264689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.264877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.264902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.265925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.265950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.266166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.266192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.266308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.266334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 
00:33:41.269 [2024-07-24 09:19:19.266462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.266487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.266630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.269 [2024-07-24 09:19:19.266658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.269 qpair failed and we were unable to recover it. 00:33:41.269 [2024-07-24 09:19:19.266795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.266820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.266958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.266986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.267183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.267209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.267315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.267340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.267513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.267537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.267685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.267713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.267896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.267921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.268061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 
00:33:41.270 [2024-07-24 09:19:19.268208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.268367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.268501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.268688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.268865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.268889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.269029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.269236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.269372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.269532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.269696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 
00:33:41.270 [2024-07-24 09:19:19.269881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.269910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.270051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.270093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.270246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.270272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.270432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.270457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.270639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.270667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.270789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.270816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.271029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.271199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.271365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.271523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 
00:33:41.270 [2024-07-24 09:19:19.271655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.271843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.271871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.272846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.272982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.273008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.270 [2024-07-24 09:19:19.273141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.273167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 
00:33:41.270 [2024-07-24 09:19:19.273277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.270 [2024-07-24 09:19:19.273302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.270 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.273447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.273472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.273582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.273625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.273802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.273830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.273985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.274149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.274313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.274446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.274612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.274775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 
00:33:41.271 [2024-07-24 09:19:19.274942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.274967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.275111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.275136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.275268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.275292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.275430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.275455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.275614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.275642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.275828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.275853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.276016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.276217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.276380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.276585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 
00:33:41.271 [2024-07-24 09:19:19.276757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.276956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.276985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.277959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.277983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.278148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.278173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.278313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.278339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 
00:33:41.271 [2024-07-24 09:19:19.278475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.278501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.278648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.278673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.278788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.278812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.278995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.279150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.279319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.279485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.279691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.279850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.271 [2024-07-24 09:19:19.279891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.271 qpair failed and we were unable to recover it. 00:33:41.271 [2024-07-24 09:19:19.280071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 
00:33:41.272 [2024-07-24 09:19:19.280254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.280389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.280595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.280759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.280895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.280935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.281130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.281156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.281293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.281318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.281479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.281522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.281660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.281689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 00:33:41.272 [2024-07-24 09:19:19.281827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.272 [2024-07-24 09:19:19.281854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.272 qpair failed and we were unable to recover it. 
00:33:41.272 [2024-07-24 09:19:19.282019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.282238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.282404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.282586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.282764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.282924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.282951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.283891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.283922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.284924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.284949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.285925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.285968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.272 qpair failed and we were unable to recover it.
00:33:41.272 [2024-07-24 09:19:19.286129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.272 [2024-07-24 09:19:19.286155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.286270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.286295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.286457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.286481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.286612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.286640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.286784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.286812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.286973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.286997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.287965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.287989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.288907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.288932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.289942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.289970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.290176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.273 [2024-07-24 09:19:19.290216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.273 qpair failed and we were unable to recover it.
00:33:41.273 [2024-07-24 09:19:19.290373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.290430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.290599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.290626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.290735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.290778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.290943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.290972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.291153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.291178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.291310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.291336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.291474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.291515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.291674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.291700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.291809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.291834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.292917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.292945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.293129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.293296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.293444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.293678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.293857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.293991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.294148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.294316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.294480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.294710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.294933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.294959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.295953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.295981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.274 qpair failed and we were unable to recover it.
00:33:41.274 [2024-07-24 09:19:19.296940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.274 [2024-07-24 09:19:19.296964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.297950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.297975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.298902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.298926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.299813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.299852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.300057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.300272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.300442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.300620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.300800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.300983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.301180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.301338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.301563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.301746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.301905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.301932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.302122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.302148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.302256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.302281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.302470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.302526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.302689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.302717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.302902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.302930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.303053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.303081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.303229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.303255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.303413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.303440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.275 qpair failed and we were unable to recover it.
00:33:41.275 [2024-07-24 09:19:19.303668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.275 [2024-07-24 09:19:19.303720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.303850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.303879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.304031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.304058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.304221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.304260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.304408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.304435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.304599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.304656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.304911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.304963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.305112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.305137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.305276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.305301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.305575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.305604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.305732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.305794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.305960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.305988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.306121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.306165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.306324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.306363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.306504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.306551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.306759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.306801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.306938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.306963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.307079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.307110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.307283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.307326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.307458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.307501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.307638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.307663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.307838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.307866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.308032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.308058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.308218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.308261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.308395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.308450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.308749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.308805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.308951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.308976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.309933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.309960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.310111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.310243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.310391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.310553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.276 [2024-07-24 09:19:19.310682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.276 qpair failed and we were unable to recover it.
00:33:41.276 [2024-07-24 09:19:19.310796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.310822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.310964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.310989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.311145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.311283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.311439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.311602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.311788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.311963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.277 [2024-07-24 09:19:19.312002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.277 qpair failed and we were unable to recover it.
00:33:41.277 [2024-07-24 09:19:19.312156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.312323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.312490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.312654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.312807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.312963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.312987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.313150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.313175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.313339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.313367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.313542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.313569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.313712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.313739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-07-24 09:19:19.313888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.313916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.314070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.314095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.314247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.314274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.314407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.314435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.314655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.314707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.314929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.314976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.315168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.315194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.315332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.315357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.315466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.315507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.315625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.315652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 
00:33:41.277 [2024-07-24 09:19:19.315887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.315915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.316092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.316125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.316278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.316302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.316414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.316439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.277 qpair failed and we were unable to recover it. 00:33:41.277 [2024-07-24 09:19:19.316576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.277 [2024-07-24 09:19:19.316617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-07-24 09:19:19.316773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.278 [2024-07-24 09:19:19.316801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-07-24 09:19:19.317021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.278 [2024-07-24 09:19:19.317048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-07-24 09:19:19.317212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.278 [2024-07-24 09:19:19.317237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.278 qpair failed and we were unable to recover it. 00:33:41.278 [2024-07-24 09:19:19.317376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.278 [2024-07-24 09:19:19.317400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.317567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.317595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 
00:33:41.560 [2024-07-24 09:19:19.317752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.317780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.317940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.317967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.318903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.318947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.319074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.319210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 
00:33:41.560 [2024-07-24 09:19:19.319351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.319508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.319683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.319827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.319859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.320812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.320836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 
00:33:41.560 [2024-07-24 09:19:19.320985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.321010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.321118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.321143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.321258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.321284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.321405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.321433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.321550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.560 [2024-07-24 09:19:19.321577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.560 qpair failed and we were unable to recover it. 00:33:41.560 [2024-07-24 09:19:19.321751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.321779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.321899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.321927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.322059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.322274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.322427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 
00:33:41.561 [2024-07-24 09:19:19.322575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.322722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.322900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.322928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.323107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.323151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.323286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.323310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.323448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.323472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.323668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.323695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.323895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.323923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.324081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.324229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 
00:33:41.561 [2024-07-24 09:19:19.324359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.324514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.324666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.324848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.324875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.325059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.325216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.325427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.325636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.325838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.325979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 
00:33:41.561 [2024-07-24 09:19:19.326124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.326303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.326457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.326672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.326835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.326862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.326988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.327173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.327326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.327472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.327649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 
00:33:41.561 [2024-07-24 09:19:19.327804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.327831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.327982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.328011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.561 [2024-07-24 09:19:19.328137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.561 [2024-07-24 09:19:19.328163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.561 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.328311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.328339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.328515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.328543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.328774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.328801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.328947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.328975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.329141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.329171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.329323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.329351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.329496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.329524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 
00:33:41.562 [2024-07-24 09:19:19.329672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.329700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.329883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.329931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.330069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.330095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.330240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.330265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.330426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.330468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.330655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.330698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.330887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.330931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.331073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.331099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.331243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.331268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.331451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.331479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 
00:33:41.562 [2024-07-24 09:19:19.331702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.331758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.331884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.331912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.332091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.332126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.332281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.332306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.332488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.332516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.332664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.332692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.332877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.332905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.333031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.333224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.333369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 
00:33:41.562 [2024-07-24 09:19:19.333533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.333717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.333900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.333928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.334936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.334964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.562 [2024-07-24 09:19:19.335085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.335119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 
00:33:41.562 [2024-07-24 09:19:19.335256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.562 [2024-07-24 09:19:19.335282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.562 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.335391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.335417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.335516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.335541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.335705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.335734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.335909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.335937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.336087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.336122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.336260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.336285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.336422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.336447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.336616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.336644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.336817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.336845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 
00:33:41.563 [2024-07-24 09:19:19.337089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.337143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.337306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.337331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.337509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.337550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.337856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.337884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.337999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.338169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.338311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.338475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.338640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.338855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.338883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 
00:33:41.563 [2024-07-24 09:19:19.339035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.339206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.339376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.339567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.339738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.339897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.339925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.340080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.340271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.340432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.340592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 
00:33:41.563 [2024-07-24 09:19:19.340784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.340963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.340990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.341148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.341190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.341304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.341330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.341511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.341538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.341716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.341744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.341891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.341919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.342077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.342110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.563 [2024-07-24 09:19:19.342246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.563 [2024-07-24 09:19:19.342271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.563 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.342454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.342481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 
00:33:41.564 [2024-07-24 09:19:19.342657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.342684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.342827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.342855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.343831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.343856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.344051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.344223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 
00:33:41.564 [2024-07-24 09:19:19.344363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.344561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.344734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.344909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.344953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.345918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.345943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 
00:33:41.564 [2024-07-24 09:19:19.346123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.346165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.346300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.346325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.346433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.346462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.346629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.346657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.346821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.346846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.346985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.347010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.347163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.347189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.347330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.347355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.347511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.347538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 00:33:41.564 [2024-07-24 09:19:19.347700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.564 [2024-07-24 09:19:19.347728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.564 qpair failed and we were unable to recover it. 
00:33:41.565 [2024-07-24 09:19:19.347905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.347930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.348888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.348999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.349164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.349328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 
00:33:41.565 [2024-07-24 09:19:19.349488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.349675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.349827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.349852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.350860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.350993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 
00:33:41.565 [2024-07-24 09:19:19.351190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.351350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.351555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.351732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.351917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.351942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.352079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.352293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.352446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.352587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.352723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 
00:33:41.565 [2024-07-24 09:19:19.352909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.352934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.353924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.353949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.565 [2024-07-24 09:19:19.354092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.565 [2024-07-24 09:19:19.354123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.565 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.354229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.354254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.354420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.354445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 
00:33:41.566 [2024-07-24 09:19:19.354555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.354579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.354693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.354718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.354848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.354872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.355941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.355966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.356082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.356113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 
00:33:41.566 [2024-07-24 09:19:19.356303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.356331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.356466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.356490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.356623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.356648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.356816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.356844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.357891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.357916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 
00:33:41.566 [2024-07-24 09:19:19.358032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.358180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.358342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.358551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.358690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.358886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.358913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.359049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.359249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.359420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.359584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 
00:33:41.566 [2024-07-24 09:19:19.359792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.359939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.359967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.360125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.360151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.360321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.360349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.360530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.360558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.566 [2024-07-24 09:19:19.360717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.566 [2024-07-24 09:19:19.360743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.566 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.360923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.360950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.361075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.361109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.361268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.361292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.361428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.361471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 
00:33:41.567 [2024-07-24 09:19:19.361590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.361619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.361771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.361796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.361974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.362180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.362369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.362530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.362740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.362903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.362932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.363085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.363316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 
00:33:41.567 [2024-07-24 09:19:19.363478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.363649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.363785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.363973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.363997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.364148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.364176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.364358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.364383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.364521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.364546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.364683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.364708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.364836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.364863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.365000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 
00:33:41.567 [2024-07-24 09:19:19.365202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.365370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.365566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.365754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.365951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.365979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.366115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.366282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.366412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.366597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.366776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 
00:33:41.567 [2024-07-24 09:19:19.366952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.366979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.567 [2024-07-24 09:19:19.367141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.567 [2024-07-24 09:19:19.367167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.567 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.367304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.367331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.367493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.367521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.367685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.367714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.367822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.367847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.367999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.368196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.368352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.368563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 
00:33:41.568 [2024-07-24 09:19:19.368726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.368886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.368911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.369931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.369955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.370122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.370165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.370327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.370352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 
00:33:41.568 [2024-07-24 09:19:19.370493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.370536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.370687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.370715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.370882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.370907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.371949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.371974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 00:33:41.568 [2024-07-24 09:19:19.372116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.568 [2024-07-24 09:19:19.372158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.568 qpair failed and we were unable to recover it. 
00:33:41.568 [2024-07-24 09:19:19.372281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.568 [2024-07-24 09:19:19.372310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.568 qpair failed and we were unable to recover it.
00:33:41.568 [2024-07-24 09:19:19.372474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.568 [2024-07-24 09:19:19.372504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.568 qpair failed and we were unable to recover it.
00:33:41.568 [2024-07-24 09:19:19.372655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.568 [2024-07-24 09:19:19.372683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.568 qpair failed and we were unable to recover it.
00:33:41.568 [2024-07-24 09:19:19.372832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.372860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.373917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.373944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.374877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.374916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.375120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.375265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.375433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.375592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.375773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.375952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.376195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.376333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.376523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.376689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.376874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.376899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.377904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.377929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.378068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.378097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.569 [2024-07-24 09:19:19.378295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.569 [2024-07-24 09:19:19.378321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.569 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.378450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.378478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.378640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.378665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.378828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.378853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.379926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.379951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.380065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.380090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.380253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.380292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.380442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.380468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.380633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.380661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.380842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.380898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.381086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.381125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.381267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.381291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.381430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.381457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.381699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.381724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.381860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.381890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.382048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.382221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.382391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.382663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.382842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.382981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.383025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.383166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.383192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.383415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.383440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.383591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.383618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.383853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.383905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.384057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.384084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.384252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.570 [2024-07-24 09:19:19.384277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.570 qpair failed and we were unable to recover it.
00:33:41.570 [2024-07-24 09:19:19.384388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.384413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.384559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.384584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.384699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.384724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.384869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.384894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.385891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.385916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.386944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.386971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.387123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.387153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.387295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.387320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.387470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.387497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.387647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.387672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.387810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.387850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.388873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.388898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.389900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.389925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.390063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.390087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.390206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.571 [2024-07-24 09:19:19.390232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.571 qpair failed and we were unable to recover it.
00:33:41.571 [2024-07-24 09:19:19.390412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.390440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.390597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.390622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.390790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.390814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.390922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.390947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.391839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.391865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.392876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.392903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.393898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.393923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.394947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.394972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.395952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.572 [2024-07-24 09:19:19.395977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.572 qpair failed and we were unable to recover it.
00:33:41.572 [2024-07-24 09:19:19.396088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.396971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.396995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.397918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.397943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.398126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.398154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.398309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.398333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.398494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.398526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.398710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.398735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.398903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.398930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.399156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.399185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.399342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.399367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.399502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.399528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.399658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.399686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.399848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.399872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.400966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.400991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.401136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.401178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.401295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.401320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.401435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.573 [2024-07-24 09:19:19.401460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.573 qpair failed and we were unable to recover it.
00:33:41.573 [2024-07-24 09:19:19.401623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.574 [2024-07-24 09:19:19.401651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.574 qpair failed and we were unable to recover it.
00:33:41.574 [2024-07-24 09:19:19.401815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.574 [2024-07-24 09:19:19.401839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.574 qpair failed and we were unable to recover it.
00:33:41.574 [2024-07-24 09:19:19.401949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.401975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.402950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.402975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.403119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.403264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.403393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 
00:33:41.574 [2024-07-24 09:19:19.403577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.403735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.403938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.403966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.404163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.404330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.404508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.404671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.404832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.404965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.405132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 
00:33:41.574 [2024-07-24 09:19:19.405289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.405460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.405622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.405818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.405960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.405987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.406139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.406302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.406437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.406619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.574 [2024-07-24 09:19:19.406807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 
00:33:41.574 [2024-07-24 09:19:19.406936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.574 [2024-07-24 09:19:19.406961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.574 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.407149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.407178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.407355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.407380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.407494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.407519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.407660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.407688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.407906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.407931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.408108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.408260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.408434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.408573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 
00:33:41.575 [2024-07-24 09:19:19.408734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.408944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.408971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.409956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.409981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.410099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.410130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.410267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.410295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 
00:33:41.575 [2024-07-24 09:19:19.410452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.410479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.410615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.410640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.410795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.410823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.411011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.411035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.411194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.411222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.411378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.411405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.411525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.411550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.575 [2024-07-24 09:19:19.411700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.575 [2024-07-24 09:19:19.411725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.575 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.411921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.411945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.412079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 
00:33:41.576 [2024-07-24 09:19:19.412216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.412385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.412553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.412695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.412865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.412890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.413025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.413160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.413419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.413619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.413781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 
00:33:41.576 [2024-07-24 09:19:19.413971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.413998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.414233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.414258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.414400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.414425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.414561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.414587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.414787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.414812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.414974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.415232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.415392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.415579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.415783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 
00:33:41.576 [2024-07-24 09:19:19.415969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.415994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.416954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.416979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.417097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.417257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.417484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 
00:33:41.576 [2024-07-24 09:19:19.417628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.417789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.417954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.417978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.418119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.418145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.418302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.576 [2024-07-24 09:19:19.418330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.576 qpair failed and we were unable to recover it. 00:33:41.576 [2024-07-24 09:19:19.418485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.418510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.418646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.418671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.418861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.418889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.419040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.419209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 
00:33:41.577 [2024-07-24 09:19:19.419366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.419546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.419683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.419869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.419897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.420844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.420883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 
00:33:41.577 [2024-07-24 09:19:19.421024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.421159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.421352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.421571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.421711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.421914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.421941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.422124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.422150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.422295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.422323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.422475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.422503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.422689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.422714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 
00:33:41.577 [2024-07-24 09:19:19.422815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.422856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.422984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.423170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.423335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.423501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.423690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.423834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.423858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.424023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.424209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.424374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 
00:33:41.577 [2024-07-24 09:19:19.424541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.424705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.424864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.577 [2024-07-24 09:19:19.424889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.577 qpair failed and we were unable to recover it. 00:33:41.577 [2024-07-24 09:19:19.425000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.425968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.425994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 
00:33:41.578 [2024-07-24 09:19:19.426160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.426185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.426337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.426379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.426532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.426559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.426734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.426758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.426927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.426955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.427077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.427110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.427248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.427272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.427385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.427411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.427538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.427563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 00:33:41.578 [2024-07-24 09:19:19.427724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.578 [2024-07-24 09:19:19.427749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.578 qpair failed and we were unable to recover it. 
00:33:41.578 [2024-07-24 09:19:19.427911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.578 [2024-07-24 09:19:19.427954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.578 qpair failed and we were unable to recover it.
00:33:41.578 [... the three messages above repeat for roughly 210 consecutive reconnect attempts between 09:19:19.427911 and 09:19:19.464745, every attempt targeting addr=10.0.0.2, port=4420 and failing with errno = 111; most attempts report tqpair=0x12774b0, a short burst around 09:19:19.438731-09:19:19.439077 reports tqpair=0x7f7428000b90, and the final attempts from 09:19:19.464232 onward report tqpair=0x7f7420000b90 ...]
00:33:41.584 [2024-07-24 09:19:19.464720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.584 [2024-07-24 09:19:19.464745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.584 qpair failed and we were unable to recover it.
00:33:41.588 [2024-07-24 09:19:19.495736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.495765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.495942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.495970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.496119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.496161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.496320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.496348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.496521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.496548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.496672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.496700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.496854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.496883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.497014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.497040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.497153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.497180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.497343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.497368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 
00:33:41.588 [2024-07-24 09:19:19.497522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.588 [2024-07-24 09:19:19.497549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.588 qpair failed and we were unable to recover it. 00:33:41.588 [2024-07-24 09:19:19.497724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.497752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.497984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.498181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.498347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.498539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.498691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.498854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.498881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.499001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.499028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.499214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.499243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 
00:33:41.589 [2024-07-24 09:19:19.499430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.499458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.499593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.499679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.499821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.499848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.499980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.500150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.500339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.500526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.500703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.500859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.500887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.501014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 
00:33:41.589 [2024-07-24 09:19:19.501185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.501348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.501544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.501744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.501921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.501949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.502107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.502151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.502318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.502343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.502472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.502499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.502645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.502672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.502829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.502856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 
00:33:41.589 [2024-07-24 09:19:19.503009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.503036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.503202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.503227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.503359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.503384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.589 qpair failed and we were unable to recover it. 00:33:41.589 [2024-07-24 09:19:19.503518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.589 [2024-07-24 09:19:19.503561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.503709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.503736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.503949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.503977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.504122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.504283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.504423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.504604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 
00:33:41.590 [2024-07-24 09:19:19.504779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.504928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.504956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.505119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.505145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.505281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.505306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.505483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.505539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.505716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.505760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.505910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.505954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.506122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.506165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.506295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.506338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.506499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.506542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 
00:33:41.590 [2024-07-24 09:19:19.506776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.506825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.506964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.506991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.507155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.507184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.507352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.507380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.507521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.507563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.507715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.507757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.507870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.507897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.508035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.508200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.508432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 
00:33:41.590 [2024-07-24 09:19:19.508621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.508759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.508924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.508949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.509091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.509126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.509303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.509329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.509483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.509525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.509692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.509734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.509850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.509877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.510020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.510045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.590 [2024-07-24 09:19:19.510200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.510245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 
00:33:41.590 [2024-07-24 09:19:19.510381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.590 [2024-07-24 09:19:19.510427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.590 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.510550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.510592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.510755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.510797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.510965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.510991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.511143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.511202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.511366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.511395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.511515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.511543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.511700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.511728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.511866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.511891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.512056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 
00:33:41.591 [2024-07-24 09:19:19.512202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.512361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.512561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.512758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.512935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.512960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.513142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.513200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.513350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.513393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.513576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.513620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.513792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.513818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.513952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.513977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 
00:33:41.591 [2024-07-24 09:19:19.514095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.514133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.514276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.514319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.514511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.514553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.514680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.514723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.514883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.514908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.515016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.515196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.515406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.515600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.515751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 
00:33:41.591 [2024-07-24 09:19:19.515923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.515948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.516070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.516097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.516298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.516342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.516499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.516546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.516711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.516736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.516869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.516894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.517033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.517059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.517207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.517251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.591 [2024-07-24 09:19:19.517414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.591 [2024-07-24 09:19:19.517443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.591 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.517580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.517610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 
00:33:41.592 [2024-07-24 09:19:19.517728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.517756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.517920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.517945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.518879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.518906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.519082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.519248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 
00:33:41.592 [2024-07-24 09:19:19.519414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.519584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.519789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.519971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.519999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.520161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.520187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.520297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.520322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.520458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.520485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.520630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.520658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.520834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.520861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.521012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 
00:33:41.592 [2024-07-24 09:19:19.521198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.521379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.521583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.521727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.521908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.521935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.522077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.522107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.522276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.522301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.522557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.522627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.522832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.522876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.523044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.523070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 
00:33:41.592 [2024-07-24 09:19:19.523223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.523249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.523390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.523432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.523619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.523661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.523794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.523838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.523982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.524007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.524194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.592 [2024-07-24 09:19:19.524238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.592 qpair failed and we were unable to recover it. 00:33:41.592 [2024-07-24 09:19:19.524352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.524379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.524522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.524548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.524658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.524684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.524847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.524873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 
00:33:41.593 [2024-07-24 09:19:19.524979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.525896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.525921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 
00:33:41.593 [2024-07-24 09:19:19.526472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.526957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.526982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.527925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.527950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 
00:33:41.593 [2024-07-24 09:19:19.528115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.528155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.528308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.528335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.528502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.528527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.528745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.528798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.528977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.529004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.529173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.529199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.529358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.529386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.529560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.593 [2024-07-24 09:19:19.529588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.593 qpair failed and we were unable to recover it. 00:33:41.593 [2024-07-24 09:19:19.529717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.529745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.529878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.529919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-07-24 09:19:19.530045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.530074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.530238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.530264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.530450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.530478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.530633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.530661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.530820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.530849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.531028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.531204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.531343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.531498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.531699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-07-24 09:19:19.531853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.531879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.532961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.532986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.533098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.533249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.533410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-07-24 09:19:19.533570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.533759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.533958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.533985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.534126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.534152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.534314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.534339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.534507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.534552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.534769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.534798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.534943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.534971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.535161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.535186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.535308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.535333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 
00:33:41.594 [2024-07-24 09:19:19.535450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.535490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.535642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.535670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.535845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.535872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.536019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.536046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.536204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.536230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.536355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.536383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.594 qpair failed and we were unable to recover it. 00:33:41.594 [2024-07-24 09:19:19.536518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.594 [2024-07-24 09:19:19.536559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.536693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.536721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.536898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.536925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.537074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.537107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 
00:33:41.595 [2024-07-24 09:19:19.537232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.537257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.537426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.537485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.537652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.537698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.537863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.537905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.538955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.538980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 
00:33:41.595 [2024-07-24 09:19:19.539119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.539145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.539277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.539321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.539507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.539550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.539737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.539781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.539916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.539942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.540079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.540111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.540276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.540320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.540488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.540530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.540696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.540740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.540902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.540928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 
00:33:41.595 [2024-07-24 09:19:19.541068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.541095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.541271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.541299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.541545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.541574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.541754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.541797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.541952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.541977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.542120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.542147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.542310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.542339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.542517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.542560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.542723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.542765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.542902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.542928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 
00:33:41.595 [2024-07-24 09:19:19.543046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.543071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.543235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.543284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.543416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.543459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.543595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.543639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.595 qpair failed and we were unable to recover it. 00:33:41.595 [2024-07-24 09:19:19.543799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.595 [2024-07-24 09:19:19.543824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.543969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.543994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.544147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.544176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.544349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.544377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.544526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.544572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.544688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.544714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 
00:33:41.596 [2024-07-24 09:19:19.544851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.544876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.544984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.545955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.545980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.546096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.546264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 
00:33:41.596 [2024-07-24 09:19:19.546419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.546618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.546774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.546938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.546963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.547111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.547154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.547303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.547334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.547517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.547561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.547724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.547766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.547915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.547940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.548078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.548107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 
00:33:41.596 [2024-07-24 09:19:19.548247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.548275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.548395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.548422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.548556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.548598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.548774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.548802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.548985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.549155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.549291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.549482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.549639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.549814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.549842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 
00:33:41.596 [2024-07-24 09:19:19.550020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.550044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.550188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.550215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.596 qpair failed and we were unable to recover it. 00:33:41.596 [2024-07-24 09:19:19.550384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.596 [2024-07-24 09:19:19.550412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.550663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.550727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.550917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.550960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.551074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.551108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.551257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.551282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.551413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.551456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.551612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.551656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 00:33:41.597 [2024-07-24 09:19:19.551797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.597 [2024-07-24 09:19:19.551823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.597 qpair failed and we were unable to recover it. 
[... the same three-line record — connect() failed, errno = 111 (ECONNREFUSED), sock connection error, "qpair failed and we were unable to recover it." — repeats for tqpair=0x12774b0 and tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420, from 09:19:19.551984 through 09:19:19.573488 ...]
00:33:41.600 [2024-07-24 09:19:19.573673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.573716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.573879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.573904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.574079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.574315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.574506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.574696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.574847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.574976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.575005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.575183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.575212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
00:33:41.600 [2024-07-24 09:19:19.575390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.600 [2024-07-24 09:19:19.575435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.600 qpair failed and we were unable to recover it.
[... the same failure pattern continues for tqpair=0x7f7420000b90 and tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420, from 09:19:19.575598 through 09:19:19.588382; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:33:41.602 [2024-07-24 09:19:19.588539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.588582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.588788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.588838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.588954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.588980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.589125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.589150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.589281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.589324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.589471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.589517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.589694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.589721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.602 qpair failed and we were unable to recover it. 00:33:41.602 [2024-07-24 09:19:19.589864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.602 [2024-07-24 09:19:19.589890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.589995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.590162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-07-24 09:19:19.590304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.590470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.590635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.590795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.590951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.590977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.591131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.591294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.591441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.591604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.591776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-07-24 09:19:19.591942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.591969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.592130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.592157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.592311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.592340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.592527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.592555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.592704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.592733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.592883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.592911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.593045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.593216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.593388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.593570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 
00:33:41.603 [2024-07-24 09:19:19.593759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.593963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.593991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.594160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.594186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.594327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.594353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.594521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.594550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.594674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.594702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.594850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.603 [2024-07-24 09:19:19.594877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.603 qpair failed and we were unable to recover it. 00:33:41.603 [2024-07-24 09:19:19.595051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.595194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.595350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-07-24 09:19:19.595523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.595717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.595920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.595947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.596111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.596266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.596458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.596667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.596826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.596981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.597175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-07-24 09:19:19.597339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.597532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.597739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.597885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.597913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.598058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.598255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.598397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.598563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.598795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.598988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.599017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-07-24 09:19:19.599164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.599190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.599307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.599332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.599507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.599534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.599727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.599755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.599993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.600171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.600316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.600491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.600624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.600808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.600838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 
00:33:41.604 [2024-07-24 09:19:19.601012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.601846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.604 [2024-07-24 09:19:19.601976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.604 [2024-07-24 09:19:19.602004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.604 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.602130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.602309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.602481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-07-24 09:19:19.602613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.602778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.602966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.602995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.603172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.603198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.603341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.603367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.603511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.603541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.603681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.603706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.603879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.603907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.604084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.604267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-07-24 09:19:19.604449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.604636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.604814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.604971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.604996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.605169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.605194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.605338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.605363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.605554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.605582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.605738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.605762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.605901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.605943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.606143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.606169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-07-24 09:19:19.606286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.606313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.606463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.606487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.606650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.606693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.606860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.606886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.607948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.607974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 
00:33:41.605 [2024-07-24 09:19:19.608154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.608180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.608291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.608316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.608473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.608512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.608675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.608719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.608881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.605 [2024-07-24 09:19:19.608925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.605 qpair failed and we were unable to recover it. 00:33:41.605 [2024-07-24 09:19:19.609060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.609206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.609334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.609525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.609706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 
00:33:41.606 [2024-07-24 09:19:19.609868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.609894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.610881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.610908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.611063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.611089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.611236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.611262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 00:33:41.606 [2024-07-24 09:19:19.611394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.611422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it. 
00:33:41.606 [2024-07-24 09:19:19.611658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.606 [2024-07-24 09:19:19.611686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.606 qpair failed and we were unable to recover it.
00:33:41.612 [... the identical three-line sequence -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-24 09:19:19.611842 through 09:19:19.650212, cycling over tqpair values 0x7f7418000b90, 0x7f7420000b90, and 0x12774b0, with every attempt refused ...]
00:33:41.612 [2024-07-24 09:19:19.650409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.650473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.650628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.650656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.650781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.650808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.650959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.650987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.651152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.651178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.651318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.651343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.651536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.651563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.651692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.651732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.612 [2024-07-24 09:19:19.651908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.612 [2024-07-24 09:19:19.651936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.612 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.652061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 
00:33:41.901 [2024-07-24 09:19:19.652269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.652410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.652567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.652718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.652931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.652958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.901 qpair failed and we were unable to recover it. 00:33:41.901 [2024-07-24 09:19:19.653114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.901 [2024-07-24 09:19:19.653151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.653264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.653289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.653409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.653435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.653589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.653617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.653741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.653769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-07-24 09:19:19.653937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.653993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.654164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.654192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.654335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.654362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.654478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.654503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.654660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.654688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.654833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.654861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-07-24 09:19:19.655612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.655964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.655992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.656941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.656969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.657117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-07-24 09:19:19.657275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.657433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.657592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.657788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.657967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.657995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.658146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.658282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.658415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.658568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.658771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 
00:33:41.902 [2024-07-24 09:19:19.658944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.658972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.659113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.659138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.902 [2024-07-24 09:19:19.659241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.902 [2024-07-24 09:19:19.659266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.902 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.659430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.659455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.659655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.659683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.659836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.659865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.660012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.660039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.660201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.660227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.660405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.660433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.660718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.660781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 
00:33:41.903 [2024-07-24 09:19:19.660935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.660963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.661167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.661299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.661464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.661652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.661831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.661975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.662134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.662326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.662538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 
00:33:41.903 [2024-07-24 09:19:19.662700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.662908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.662933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.663082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.663113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.663244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.663273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.663421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.663449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.663611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.663636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.663820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.663848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.664000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.664187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.664352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 
00:33:41.903 [2024-07-24 09:19:19.664515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.664720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.664886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.664929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.665942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.665983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 00:33:41.903 [2024-07-24 09:19:19.666129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.666170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.903 qpair failed and we were unable to recover it. 
00:33:41.903 [2024-07-24 09:19:19.666333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.903 [2024-07-24 09:19:19.666358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.666512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.666540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.666703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.666735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.666937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.666962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.667122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.667304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.667494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.667658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.667855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.667984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 
00:33:41.904 [2024-07-24 09:19:19.668122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.668312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.668471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.668632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.668791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.668819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.668978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.669147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.669333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.669521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.669685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 
00:33:41.904 [2024-07-24 09:19:19.669880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.669908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.670863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.670890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.671053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.671224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.671412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 
00:33:41.904 [2024-07-24 09:19:19.671595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.671726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.671925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.671951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.672114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.672156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.672320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.672344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.672528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.672556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.672699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.672724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.904 [2024-07-24 09:19:19.672835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.904 [2024-07-24 09:19:19.672860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.904 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.672995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.673177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 
00:33:41.905 [2024-07-24 09:19:19.673352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.673540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.673726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.673872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.673897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.674899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.674924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 
00:33:41.905 [2024-07-24 09:19:19.675031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.675190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.675328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.675514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.675720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.675907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.675947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.676081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.676276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.676463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.676655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 
00:33:41.905 [2024-07-24 09:19:19.676806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.676971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.676996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.677164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.677193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.677353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.677378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.677517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.677560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.677710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.677738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.677912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.677937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.678052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.678094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.678265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.678291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.678451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.678476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 
00:33:41.905 [2024-07-24 09:19:19.678658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.678685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.678842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.678870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.679020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.679047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.679221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.679248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.679416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.679441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.905 [2024-07-24 09:19:19.679544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.905 [2024-07-24 09:19:19.679569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.905 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.679729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.679754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.679907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.679935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.680072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.680097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.680241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.680266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 
00:33:41.906 [2024-07-24 09:19:19.680424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.680451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.680613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.680638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.680779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.680821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.680973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.681861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.681996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 
00:33:41.906 [2024-07-24 09:19:19.682201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.682363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.682527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.682692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.682905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.682933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.683086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.683285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.683421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.683595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.683758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 
00:33:41.906 [2024-07-24 09:19:19.683944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.683971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.684157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.684183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.684315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.684339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.684480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.684522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.684671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.684699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.906 [2024-07-24 09:19:19.684880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.906 [2024-07-24 09:19:19.684905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.906 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.685065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.685232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.685395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.685580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 
00:33:41.907 [2024-07-24 09:19:19.685760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.685958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.685984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.686144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.686188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.686364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.686397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.686558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.686583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.686696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.686737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.686887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.686915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.687106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.687288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.687445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 
00:33:41.907 [2024-07-24 09:19:19.687633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.687776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.687941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.687969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.688127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.688162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.688304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.688330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.688507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.688532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.688647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.688673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.688817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.688860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.689014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.689211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 
00:33:41.907 [2024-07-24 09:19:19.689381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.689568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.689722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.689884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.689910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.690127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.690342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.690533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.690720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.690850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.690984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.691009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 
00:33:41.907 [2024-07-24 09:19:19.691139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.691171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.691312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.691337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.691489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.691516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.907 qpair failed and we were unable to recover it. 00:33:41.907 [2024-07-24 09:19:19.691637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.907 [2024-07-24 09:19:19.691666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.691820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.691845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.691985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.692186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.692347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.692567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.692721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 
00:33:41.908 [2024-07-24 09:19:19.692937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.692962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.693964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.693989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.694122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.694164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.694310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.694337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.694517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.694541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 
00:33:41.908 [2024-07-24 09:19:19.694676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.694704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.694877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.694905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.695948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.695982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.696174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.696201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.696356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.696384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 
00:33:41.908 [2024-07-24 09:19:19.696528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.696556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.696692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.696717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.696854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.696879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.697834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.697992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.698020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 
00:33:41.908 [2024-07-24 09:19:19.698198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.908 [2024-07-24 09:19:19.698223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.908 qpair failed and we were unable to recover it. 00:33:41.908 [2024-07-24 09:19:19.698334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.698359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.698499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.698524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.698689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.698713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.698872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.698915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.699034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.699232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.699413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.699584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.699732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 
00:33:41.909 [2024-07-24 09:19:19.699894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.699919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.700080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.700115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.700251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.700277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.700438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.700479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.700659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.700686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.700850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.700875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.701053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.701220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.701387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.701543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 
00:33:41.909 [2024-07-24 09:19:19.701717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.701935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.701961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.702960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.702985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.703118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.703280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 
00:33:41.909 [2024-07-24 09:19:19.703426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.703590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.703754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.703923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.703948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.704079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.704116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.704294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.704319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.704489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.704514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.704630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.704656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.909 [2024-07-24 09:19:19.704789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.909 [2024-07-24 09:19:19.704814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.909 qpair failed and we were unable to recover it. 00:33:41.910 [2024-07-24 09:19:19.704948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.910 [2024-07-24 09:19:19.704973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.910 qpair failed and we were unable to recover it. 
[... the same three-line failure sequence repeats back-to-back for tqpair=0x7f7428000b90 and tqpair=0x12774b0 from 09:19:19.705 through 09:19:19.740: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111 and the qpair cannot be recovered ...]
00:33:41.915 [2024-07-24 09:19:19.738869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.738898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.739084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.739121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.739277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.739303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.739442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.739470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.739651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.739676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.739834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.739863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.740006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.740199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.740344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.740575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 
00:33:41.915 [2024-07-24 09:19:19.740734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.740893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.740935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.741085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.741301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.741459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.741621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.741804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.741988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.742150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.742360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 
00:33:41.915 [2024-07-24 09:19:19.742525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.742693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.915 [2024-07-24 09:19:19.742878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.915 [2024-07-24 09:19:19.742903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.915 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.743899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.743923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.744067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 
00:33:41.916 [2024-07-24 09:19:19.744210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.744375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.744537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.744698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.744862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.744886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 
00:33:41.916 [2024-07-24 09:19:19.745770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.745933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.745959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.746926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.746950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.747066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.747212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 
00:33:41.916 [2024-07-24 09:19:19.747391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.747609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.747778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.747951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.747993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.748153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.748339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.748526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.748695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.748857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.916 [2024-07-24 09:19:19.748994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.749035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 
00:33:41.916 [2024-07-24 09:19:19.749159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.916 [2024-07-24 09:19:19.749188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.916 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.749354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.749379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.749486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.749510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.749701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.749729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.749890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.749916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.750069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.750096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.750298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.750324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.750487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.750511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.750646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.750674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.750848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.750876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 
00:33:41.917 [2024-07-24 09:19:19.751022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.751242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.751382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.751547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.751733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.751901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.751928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.752080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.752229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.752364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.752550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 
00:33:41.917 [2024-07-24 09:19:19.752725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.752897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.752922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.753853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.753980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.754119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 
00:33:41.917 [2024-07-24 09:19:19.754280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.754441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.754625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.754781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.754969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.754993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.755128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.755154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.755273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.755297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.755406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.755430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.917 qpair failed and we were unable to recover it. 00:33:41.917 [2024-07-24 09:19:19.755574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.917 [2024-07-24 09:19:19.755598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.755759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.755785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 
00:33:41.918 [2024-07-24 09:19:19.755965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.755991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.756150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.756316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.756480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.756642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.756828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.756988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.757148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.757336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.757520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 
00:33:41.918 [2024-07-24 09:19:19.757714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.757890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.757917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.758962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.758989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.759143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.759169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.759306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.759330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 
00:33:41.918 [2024-07-24 09:19:19.759466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.759490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.759629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.759652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.759792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.759817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.760865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.760907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.761039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 
00:33:41.918 [2024-07-24 09:19:19.761234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.761396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.761559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.761751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.761929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.761956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.762115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.918 [2024-07-24 09:19:19.762156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.918 qpair failed and we were unable to recover it. 00:33:41.918 [2024-07-24 09:19:19.762282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.919 [2024-07-24 09:19:19.762307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.919 qpair failed and we were unable to recover it. 00:33:41.919 [2024-07-24 09:19:19.762452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.919 [2024-07-24 09:19:19.762477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.919 qpair failed and we were unable to recover it. 00:33:41.919 [2024-07-24 09:19:19.762586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.919 [2024-07-24 09:19:19.762610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.919 qpair failed and we were unable to recover it. 00:33:41.919 [2024-07-24 09:19:19.762750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.919 [2024-07-24 09:19:19.762774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.919 qpair failed and we were unable to recover it. 
00:33:41.919 [2024-07-24 09:19:19.762931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.919 [2024-07-24 09:19:19.762971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.919 qpair failed and we were unable to recover it.
00:33:41.924 [entries 2024-07-24 09:19:19.763146 through 09:19:19.799761] (the same pair of errors repeats continuously: posix_sock_create connect() failed with errno = 111, i.e. ECONNREFUSED, followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x12774b0 at addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it.")
00:33:41.924 [2024-07-24 09:19:19.799940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.799966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.800154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.800330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.800472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.800667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.924 [2024-07-24 09:19:19.800823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.924 qpair failed and we were unable to recover it. 00:33:41.924 [2024-07-24 09:19:19.800982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.801168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.801360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.801504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 
00:33:41.925 [2024-07-24 09:19:19.801657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.801841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.801866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.802890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.802915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.803033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.803225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 
00:33:41.925 [2024-07-24 09:19:19.803388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.803572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.803757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.803897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.803924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.804908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.804932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 
00:33:41.925 [2024-07-24 09:19:19.805066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.805092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.805235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.805263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.805428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.805454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.805571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.805610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.805790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.805818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.805996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.806024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.806190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.806215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.806338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.925 [2024-07-24 09:19:19.806363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.925 qpair failed and we were unable to recover it. 00:33:41.925 [2024-07-24 09:19:19.806503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.806528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.806638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.806663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 
00:33:41.926 [2024-07-24 09:19:19.806803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.806827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.806964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.806989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.807933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.807957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.808095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.808299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 
00:33:41.926 [2024-07-24 09:19:19.808457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.808608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.808784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.808973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.808998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.809137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.809161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.809350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.809377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.809532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.809557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.809697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.809739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.809923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.809951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.810120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 
00:33:41.926 [2024-07-24 09:19:19.810275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.810435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.810631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.810773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.810927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.810951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.811067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.811208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.811390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.811548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.811735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 
00:33:41.926 [2024-07-24 09:19:19.811901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.811928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.812117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.812283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.812444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.812617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.926 [2024-07-24 09:19:19.812803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.926 qpair failed and we were unable to recover it. 00:33:41.926 [2024-07-24 09:19:19.812919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.812943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.813091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.813260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.813397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 
00:33:41.927 [2024-07-24 09:19:19.813538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.813724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.813905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.813930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.814092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.814142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.814303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.814327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.814481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.814509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.814663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.814688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.814826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.814866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.815014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.815226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 
00:33:41.927 [2024-07-24 09:19:19.815410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.815567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.815763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.815945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.815973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.816189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.816328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.816491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.816676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.816844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.816982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 
00:33:41.927 [2024-07-24 09:19:19.817131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.817297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.817438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.817624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.817819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.817843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.817978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.818198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.818366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.818551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.818730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 
00:33:41.927 [2024-07-24 09:19:19.818896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.818920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.819058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.819083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.819268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.819296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.927 [2024-07-24 09:19:19.819494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.927 [2024-07-24 09:19:19.819519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.927 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.819637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.819663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.819830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.819870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.820017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.820200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.820368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.820538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 
00:33:41.928 [2024-07-24 09:19:19.820703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.820896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.820938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.821968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.821992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.822128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.822154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.822289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.822314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 
00:33:41.928 [2024-07-24 09:19:19.822477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.822505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.822648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.822673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.822813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.822838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.823846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.823874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 
00:33:41.928 [2024-07-24 09:19:19.824199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.824868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.824990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.825182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.825312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.825495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.825649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 
00:33:41.928 [2024-07-24 09:19:19.825774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.928 [2024-07-24 09:19:19.825937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.928 [2024-07-24 09:19:19.825962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.928 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.826877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.826901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.827041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.827186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 
00:33:41.929 [2024-07-24 09:19:19.827351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.827525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.827662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.827850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.827877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.828842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.828867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 
00:33:41.929 [2024-07-24 09:19:19.828979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.829140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.829345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.829526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.829689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.829850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.829875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.830019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.830201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.830343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.830549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 
00:33:41.929 [2024-07-24 09:19:19.830760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.830954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.830979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.831140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.831166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.831326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.929 [2024-07-24 09:19:19.831353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.929 qpair failed and we were unable to recover it. 00:33:41.929 [2024-07-24 09:19:19.831485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.831512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.831670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.831694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.831803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.831828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.832018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.832176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.832365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 
00:33:41.930 [2024-07-24 09:19:19.832559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.832725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.832903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.832931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.833898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.833922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.834057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 
00:33:41.930 [2024-07-24 09:19:19.834235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.834449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.834620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.834806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.834969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.834996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.835152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.835182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.835330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.835359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.835495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.835522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.835668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.835695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.835858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.835886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 
00:33:41.930 [2024-07-24 09:19:19.836034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.836061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.836249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.836275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.836460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.836516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.836654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.836681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.836845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.836873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.837020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.837208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.837391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.837576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.837776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 
00:33:41.930 [2024-07-24 09:19:19.837947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.930 [2024-07-24 09:19:19.837978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.930 qpair failed and we were unable to recover it. 00:33:41.930 [2024-07-24 09:19:19.838133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.838178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.838342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.838383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.838513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.838540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.838697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.838724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.838893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.838917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.839053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.839245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.839401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.839546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 
00:33:41.931 [2024-07-24 09:19:19.839741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.839928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.839955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.840098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.840133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.840283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.840307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.840461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.840488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.840651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.840692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.840935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.840962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.841123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.841148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.841306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.841330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.841459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.841498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 
00:33:41.931 [2024-07-24 09:19:19.841692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.841719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.841853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.841899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.842075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.842276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.842450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.842658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.842835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.842984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.843174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.843334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 
00:33:41.931 [2024-07-24 09:19:19.843550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.843703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.843853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.843880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.931 qpair failed and we were unable to recover it. 00:33:41.931 [2024-07-24 09:19:19.844954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.931 [2024-07-24 09:19:19.844980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.845123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.845149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 
00:33:41.932 [2024-07-24 09:19:19.845260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.845286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.845444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.845486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.845648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.845678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.845831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.845860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.845984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.846127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.846267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.846464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.846632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.846815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.846845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 
00:33:41.932 [2024-07-24 09:19:19.847095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.847152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.847292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.847317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.847427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.847453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.847644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.847673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.847797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.847825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.847987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.848184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.848341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.848519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.848700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 
00:33:41.932 [2024-07-24 09:19:19.848885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.848910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.849077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.849110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.849244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.849283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.849470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.849500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.849652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.849680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.849889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.849946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.850095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.850156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.850300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.850325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.850490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.850518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.850693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.850719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 
00:33:41.932 [2024-07-24 09:19:19.850915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.850942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.851092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.851143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.851280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.851305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.851445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.851470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.851623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.851649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.851804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.932 [2024-07-24 09:19:19.851831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.932 qpair failed and we were unable to recover it. 00:33:41.932 [2024-07-24 09:19:19.852000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.852167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.852339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.852540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 
00:33:41.933 [2024-07-24 09:19:19.852757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.852963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.852991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.853949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.853977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.854168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.854194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.854312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.854350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 
00:33:41.933 [2024-07-24 09:19:19.854522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.854551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.854732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.854760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.854908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.854937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.855118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.855284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.855467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.855648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.855796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.855983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.856011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 00:33:41.933 [2024-07-24 09:19:19.856155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.933 [2024-07-24 09:19:19.856196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.933 qpair failed and we were unable to recover it. 
00:33:41.933 [2024-07-24 09:19:19.856404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.856431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.856586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.856644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.856801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.856829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.856977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.857964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.857991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.858097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.858139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.858264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.858289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.858405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.858430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.933 [2024-07-24 09:19:19.858562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.933 [2024-07-24 09:19:19.858586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.933 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.858750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.858775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.858919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.858945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.859070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.859116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.859287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.859314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.859527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.859556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.859897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.859951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.860964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.860993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.861182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.861209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.861346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.861388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.861601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.861629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.861777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.861805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.861955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.861984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.862142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.862169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.862307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.862332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.862509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.862550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.862766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.862795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.862947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.862981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.863148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.863174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.863313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.863339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.863522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.863550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.863735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.863776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.863924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.863953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.864127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.864170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.864308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.864334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.864583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.864633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.864823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.864851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.864976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.934 [2024-07-24 09:19:19.865004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.934 qpair failed and we were unable to recover it.
00:33:41.934 [2024-07-24 09:19:19.865194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.865221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.865361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.865402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.865525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.865553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.865718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.865748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.865915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.865944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.866066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.866095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.866273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.866298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.866481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.866510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.866672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.866713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.866930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.866956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.867073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.867098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.867224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.867263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.867449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.867479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.867645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.867671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.867820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.867847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.868820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.868847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.869872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.869899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.870926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.870954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.871119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.871158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.871304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.871331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.871440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.871468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.871630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.871659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.935 qpair failed and we were unable to recover it.
00:33:41.935 [2024-07-24 09:19:19.871835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.935 [2024-07-24 09:19:19.871878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.872903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.872931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.873088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.873136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.873260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.873287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.873446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.873490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.873627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.873669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.873801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.873844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.874006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.874032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.874184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.874211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.874345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.874370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.874531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.874559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.874778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.874840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.875031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.875074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.875251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.875295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.875525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.875582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.875810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.875861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.876956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.876984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.877144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.877171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.877311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.877337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.877562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.877614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.877793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.877822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.877985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.878013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.878223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.878263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.878419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.878450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.878672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.878726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.878908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.878936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.879086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.879121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.879302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.936 [2024-07-24 09:19:19.879327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.936 qpair failed and we were unable to recover it.
00:33:41.936 [2024-07-24 09:19:19.879432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.879475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.879624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.879653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.879837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.879864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.879990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.880019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.880180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.880206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.880369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.880394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.880554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.880601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.880753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.880782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.880975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.881964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.881992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.882160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.882199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.882372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.882416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.882550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.882593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.882722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.882748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.882924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.882952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.883115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.883154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.883304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.883331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.883518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.883546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.883765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.883794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.883934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.883978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.884130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.884172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.884337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.884378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.884562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.884588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.884778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.884806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.884980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.885164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.885364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.885559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.885762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.885938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.885967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.886126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.886183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.886301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.937 [2024-07-24 09:19:19.886327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.937 qpair failed and we were unable to recover it.
00:33:41.937 [2024-07-24 09:19:19.886461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.886491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.886667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.886696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.886846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.886874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.887050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.887078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.887236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.887262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.887411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.887469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.887632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.887677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.887887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.887934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.888099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.888130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.888263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.888288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.888417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.888461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.888641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.938 [2024-07-24 09:19:19.888712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:41.938 qpair failed and we were unable to recover it.
00:33:41.938 [2024-07-24 09:19:19.888890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.888916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.889058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.889084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.889251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.889294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.889456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.889498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.889656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.889685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.889855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.889883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.890064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.890089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.890274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.890318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.890452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.890495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.890683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.890726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 
00:33:41.938 [2024-07-24 09:19:19.890921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.890977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.891118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.891144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.891280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.891323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.891458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.891506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.891694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.891738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.891850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.891876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.892013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.892039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.892240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.892283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.892470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.892499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.892681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.892709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 
00:33:41.938 [2024-07-24 09:19:19.892842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.892867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.893028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.893054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.893211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.893241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.893371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.893399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.893552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.893580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.938 [2024-07-24 09:19:19.893728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.938 [2024-07-24 09:19:19.893756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.938 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.893913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.893941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.894084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.894114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.894228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.894253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.894393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.894435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 
00:33:41.939 [2024-07-24 09:19:19.894622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.894664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.894887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.894929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.895067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.895116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.895276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.895320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.895477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.895519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.895671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.895704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.895888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.895914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.896033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.896059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.896225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.896275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.896460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.896496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 
00:33:41.939 [2024-07-24 09:19:19.896648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.896675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.896877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.896923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.897083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.897275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.897437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.897637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.897801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.897970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.898003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.898063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1285470 (9): Bad file descriptor 00:33:41.939 [2024-07-24 09:19:19.898237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.898295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 
00:33:41.939 [2024-07-24 09:19:19.898454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.898484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.898638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.898666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.898835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.898883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.899032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.899066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.899223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.899251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.899452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.899482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.899637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.899665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.939 [2024-07-24 09:19:19.899833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.939 [2024-07-24 09:19:19.899907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.939 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.900084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.900114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.900256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.900281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 
00:33:41.940 [2024-07-24 09:19:19.900419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.900444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.900731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.900779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.900899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.900927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.901048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.901077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.901239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.901278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.901429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.901477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.901659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.901692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.901851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.901899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.902017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.902042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.902214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.902259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 
00:33:41.940 [2024-07-24 09:19:19.902420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.902450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.902631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.902659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.902924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.902974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.903130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.903178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.903294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.903337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.903488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.903517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.903768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.903821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.903983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.904170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.904333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 
00:33:41.940 [2024-07-24 09:19:19.904507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.904684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.904894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.904922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.905099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.905332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.905483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.905661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.905837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.905988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.906015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.906179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.906204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 
00:33:41.940 [2024-07-24 09:19:19.906367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.906414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.906605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.906664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.906804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.906848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.906986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.907016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.907173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.940 [2024-07-24 09:19:19.907217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.940 qpair failed and we were unable to recover it. 00:33:41.940 [2024-07-24 09:19:19.907360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.907385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.907567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.907640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.907783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.907853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.907987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.908127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 
00:33:41.941 [2024-07-24 09:19:19.908293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.908520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.908742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.908905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.908931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.909073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.909100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.909239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.909268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.909461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.909508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.909695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.909738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.909878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.909903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.910039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.910065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 
00:33:41.941 [2024-07-24 09:19:19.910240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.910285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.910418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.910459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.910635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.910699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.910841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.910866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.910982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.911176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.911353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.911521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.911680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.911844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.911873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 
00:33:41.941 [2024-07-24 09:19:19.912015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.912152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.912346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.912520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.912705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.912850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.912876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.913020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.913206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.913405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.913608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 
00:33:41.941 [2024-07-24 09:19:19.913810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.913971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.913996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.941 qpair failed and we were unable to recover it. 00:33:41.941 [2024-07-24 09:19:19.914157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.941 [2024-07-24 09:19:19.914201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.914388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.914421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.914549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.914574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.914713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.914738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.914846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.914871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.915014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.915166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.915388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 
00:33:41.942 [2024-07-24 09:19:19.915592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.915779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.915943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.915968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.916110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.916136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.916254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.916279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.916428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.916456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.916628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.916657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.916793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.916833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.917014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.917039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 00:33:41.942 [2024-07-24 09:19:19.917161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.942 [2024-07-24 09:19:19.917187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.942 qpair failed and we were unable to recover it. 
00:33:41.942 [2024-07-24 09:19:19.917349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.942 [2024-07-24 09:19:19.917375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:41.942 qpair failed and we were unable to recover it.
00:33:41.942 [... the same three-line sequence — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; "qpair failed and we were unable to recover it." — repeats continuously from 09:19:19.917604 through 09:19:19.957830, cycling through tqpair addresses 0x7f7418000b90, 0x7f7420000b90, 0x7f7428000b90, and 0x12774b0, all against addr=10.0.0.2, port=4420 ...]
00:33:41.947 [2024-07-24 09:19:19.957981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.958958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.958983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.959128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.959154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.947 [2024-07-24 09:19:19.959310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.947 [2024-07-24 09:19:19.959337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.947 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.959477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.959502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-24 09:19:19.959615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.959639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.959769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.959793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.959957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.959981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.960122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.960148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.960290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.960315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.960463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.960488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.960666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.960719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.960843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.960871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.961003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.961151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-24 09:19:19.961342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.961510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.961674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.961863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.961891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.962052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.962283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.962450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.962638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.962778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.962972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 
00:33:41.948 [2024-07-24 09:19:19.963166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.963384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.963573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.963740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.963910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.963954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.964111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.964156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.964287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.948 [2024-07-24 09:19:19.964312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.948 qpair failed and we were unable to recover it. 00:33:41.948 [2024-07-24 09:19:19.964454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.964499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.964675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.964702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.964831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.964856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-24 09:19:19.964994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.965209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.965372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.965561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.965745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.965939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.965965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.966120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.966148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.966278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.966306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.966463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.966488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.966668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.966696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-24 09:19:19.966867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.966892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.967866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.967998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.968161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.968355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-24 09:19:19.968509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.968677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.968835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.968975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.968999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.969146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.969281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.969440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.969626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.969850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.969994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.970021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 
00:33:41.949 [2024-07-24 09:19:19.970160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.949 [2024-07-24 09:19:19.970186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.949 qpair failed and we were unable to recover it. 00:33:41.949 [2024-07-24 09:19:19.970295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.970319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.970455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.970480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.970635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.970663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.970816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.970844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.971029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.971216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.971365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.971574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.971745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-24 09:19:19.971876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.971901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.972855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.972883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.973034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.973208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.973410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-24 09:19:19.973563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.973771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.973963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.973988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.974121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.974147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.974329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.974357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.974517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.974542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.974704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.974733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.974898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.974926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.975055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.975234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 
00:33:41.950 [2024-07-24 09:19:19.975376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.975593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.975776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.975941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.975983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.976163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.976189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.976331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.976357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.976510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.976538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.976684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.976712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.950 [2024-07-24 09:19:19.976868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.950 [2024-07-24 09:19:19.976894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.950 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.977038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-24 09:19:19.977178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.977361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.977542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.977698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.977882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.977908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.978043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.978219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.978386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.978571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.978747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-24 09:19:19.978897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.978922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.979853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.979878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.980063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.980091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.980276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.980304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 00:33:41.951 [2024-07-24 09:19:19.980470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:41.951 [2024-07-24 09:19:19.980495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:41.951 qpair failed and we were unable to recover it. 
00:33:41.951 [2024-07-24 09:19:19.980716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:41.951 [2024-07-24 09:19:19.980772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:41.951 qpair failed and we were unable to recover it.
00:33:42.238 [2024-07-24 09:19:19.990424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.238 [2024-07-24 09:19:19.990464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.238 qpair failed and we were unable to recover it.
00:33:42.242 [2024-07-24 09:19:20.015141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.242 [2024-07-24 09:19:20.015167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.242 qpair failed and we were unable to recover it.
00:33:42.242 [2024-07-24 09:19:20.015305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.015335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.015465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.015494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.015645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.015670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.015807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.015849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.015997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.016164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.016335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.016526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.016712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.016881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.016907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 
00:33:42.242 [2024-07-24 09:19:20.017105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.017132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.017247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.017274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.017438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.017480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.017637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.017665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.017828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.017854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.017999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.018202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.018345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.018527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.018720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 
00:33:42.242 [2024-07-24 09:19:20.018966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.018994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.019966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.019991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.020127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.020158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.020272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.020297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 00:33:42.242 [2024-07-24 09:19:20.020413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.242 [2024-07-24 09:19:20.020439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.242 qpair failed and we were unable to recover it. 
00:33:42.243 [2024-07-24 09:19:20.020541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.020566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.020700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.020728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.020855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.020881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.021905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.021930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.022097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 
00:33:42.243 [2024-07-24 09:19:20.022277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.022447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.022603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.022806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.022971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.022996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.023114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.023251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.023387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.023552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.023709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 
00:33:42.243 [2024-07-24 09:19:20.023871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.023897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.024886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.024912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 
00:33:42.243 [2024-07-24 09:19:20.025507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.025955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.025981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.026096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.026135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.026303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.026329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.026470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.026496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.243 qpair failed and we were unable to recover it. 00:33:42.243 [2024-07-24 09:19:20.026610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.243 [2024-07-24 09:19:20.026635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.026781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.026807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.026925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.026952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 
00:33:42.244 [2024-07-24 09:19:20.027092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.027244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.027408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.027539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.027674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.027813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.027840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.028021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.028197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.028365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.028531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 
00:33:42.244 [2024-07-24 09:19:20.028694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.028835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.028867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.029800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.029975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.030147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 
00:33:42.244 [2024-07-24 09:19:20.030312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.030489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.030670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.030834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.030877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.031872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.031897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 
00:33:42.244 [2024-07-24 09:19:20.032026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.032054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.032177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.032203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.032345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.244 [2024-07-24 09:19:20.032371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.244 qpair failed and we were unable to recover it. 00:33:42.244 [2024-07-24 09:19:20.032509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.032551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.032712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.032738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.032892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.032917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.033038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.033188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.033353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.033564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 
00:33:42.245 [2024-07-24 09:19:20.033730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.033878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.033903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.034969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.034995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.035243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.035273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.035421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.035446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 
00:33:42.245 [2024-07-24 09:19:20.035581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.035621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.035755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.035783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.035941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.035967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.036924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.036949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.037067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 
00:33:42.245 [2024-07-24 09:19:20.037221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.037357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.037504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.037729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.037922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.037948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.038067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.038242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.038407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.038573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.038713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 
00:33:42.245 [2024-07-24 09:19:20.038868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.245 [2024-07-24 09:19:20.038895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.245 qpair failed and we were unable to recover it. 00:33:42.245 [2024-07-24 09:19:20.039041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.039246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.039390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.039558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.039753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.039948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.039977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.040114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.040140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.040247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.040277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 00:33:42.246 [2024-07-24 09:19:20.040427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.246 [2024-07-24 09:19:20.040453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.246 qpair failed and we were unable to recover it. 
00:33:42.246 [2024-07-24 09:19:20.040588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.040613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.040794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.040821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.040944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.040972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.041947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.041972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.042131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.042171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.042309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.042334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.042505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.042530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.042689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.042716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.042887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.042913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.043064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.043089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.043230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.043255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.043411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.043467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.043657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.043684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.043813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.043840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.044009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.044036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.044176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.044204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.044318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.044344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.044478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.246 [2024-07-24 09:19:20.044504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.246 qpair failed and we were unable to recover it.
00:33:42.246 [2024-07-24 09:19:20.044624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.044649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.044765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.044791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.044958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.044990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.045935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.045962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.046955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.046981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.047890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.047915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.048876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.048994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.049931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.049956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.050096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.050130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.050262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.050287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.050396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.050422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.050559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.247 [2024-07-24 09:19:20.050585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.247 qpair failed and we were unable to recover it.
00:33:42.247 [2024-07-24 09:19:20.050723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.050749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.050912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.050937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.051924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.051949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.052880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.052905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.053882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.053910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.054917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.054942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.055911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.055937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.248 [2024-07-24 09:19:20.056799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.248 [2024-07-24 09:19:20.056824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.248 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.056947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.056974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.057888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.057914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.058859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.058885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.059839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.059981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.060937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.060962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.061912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.061937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.249 [2024-07-24 09:19:20.062881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.249 qpair failed and we were unable to recover it.
00:33:42.249 [2024-07-24 09:19:20.062994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.063854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.063994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.064142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.064282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.064478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.064654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.064841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.064866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.065939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.065964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.066876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.066901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.067971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.067996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.068136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.250 [2024-07-24 09:19:20.068162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.250 qpair failed and we were unable to recover it.
00:33:42.250 [2024-07-24 09:19:20.068273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.250 [2024-07-24 09:19:20.068298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.250 qpair failed and we were unable to recover it. 00:33:42.250 [2024-07-24 09:19:20.068458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.250 [2024-07-24 09:19:20.068498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.250 qpair failed and we were unable to recover it. 00:33:42.250 [2024-07-24 09:19:20.068661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.250 [2024-07-24 09:19:20.068686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.250 qpair failed and we were unable to recover it. 00:33:42.250 [2024-07-24 09:19:20.068821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.250 [2024-07-24 09:19:20.068845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.250 qpair failed and we were unable to recover it. 00:33:42.250 [2024-07-24 09:19:20.068991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.250 [2024-07-24 09:19:20.069015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.250 qpair failed and we were unable to recover it. 00:33:42.250 [2024-07-24 09:19:20.069155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.069180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.069318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.069342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.069521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.069549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.069725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.069753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.069917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.069942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 
00:33:42.251 [2024-07-24 09:19:20.070083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.070115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.070232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.070257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.070442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.070467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.070624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.070651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.070809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.070834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.071001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.071144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.071284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.071478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.071659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 
00:33:42.251 [2024-07-24 09:19:20.071837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.071864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.072824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.072851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.073011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.073175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.073344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 
00:33:42.251 [2024-07-24 09:19:20.073515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.073656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.073826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.073854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.074017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.074042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.074195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.074220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.074361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.074403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.074535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.251 [2024-07-24 09:19:20.074559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.251 qpair failed and we were unable to recover it. 00:33:42.251 [2024-07-24 09:19:20.074674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.074699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.074859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.074884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.075046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 
00:33:42.252 [2024-07-24 09:19:20.075207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.075398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.075593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.075755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.075961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.075986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.076125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.076153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.076270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.076296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.076466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.076494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.076622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.076647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.076785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.076809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 
00:33:42.252 [2024-07-24 09:19:20.076998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.077164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.077355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.077518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.077681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.077851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.077891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.078047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.078073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.078231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.078257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.078391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.078420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.078582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.078610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 
00:33:42.252 [2024-07-24 09:19:20.078803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.078828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.078978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.079809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.079984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.080179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.080339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 
00:33:42.252 [2024-07-24 09:19:20.080501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.080670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.080842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.080867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.081004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.081030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.081158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.081183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.252 qpair failed and we were unable to recover it. 00:33:42.252 [2024-07-24 09:19:20.081304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.252 [2024-07-24 09:19:20.081329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.081468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.081494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.081675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.081703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.081866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.081892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 
00:33:42.253 [2024-07-24 09:19:20.082143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.082955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.082983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.083119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.083307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.083440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.083629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 
00:33:42.253 [2024-07-24 09:19:20.083790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.083956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.083981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.084964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.084990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.085156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.085182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.085327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.085352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 
00:33:42.253 [2024-07-24 09:19:20.085534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.085559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.085672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.085698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.085838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.085863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.085999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.086149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.086321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.086516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.086734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.253 [2024-07-24 09:19:20.086910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.253 [2024-07-24 09:19:20.086935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.253 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.087070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.087095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 
00:33:42.254 [2024-07-24 09:19:20.087296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.087321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.087459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.087484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.087628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.087653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.087843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.087872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.088061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.088252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.088467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.088683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.088870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.088975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 
00:33:42.254 [2024-07-24 09:19:20.089167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.089336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.089509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.089668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.089831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.089856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.090022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.090193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.090358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.090519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.090717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 
00:33:42.254 [2024-07-24 09:19:20.090859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.090883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.091070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.091241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.091430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.091646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.091810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.091991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.092182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.092340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.092470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 
00:33:42.254 [2024-07-24 09:19:20.092684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.092877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.092902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.093056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.093084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.093264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.093290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.093412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.093437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.093614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.093639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.254 [2024-07-24 09:19:20.093800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.254 [2024-07-24 09:19:20.093824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.254 qpair failed and we were unable to recover it. 00:33:42.255 [2024-07-24 09:19:20.093986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.255 [2024-07-24 09:19:20.094010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.255 qpair failed and we were unable to recover it. 00:33:42.255 [2024-07-24 09:19:20.094190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.255 [2024-07-24 09:19:20.094218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.255 qpair failed and we were unable to recover it. 00:33:42.255 [2024-07-24 09:19:20.094399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.255 [2024-07-24 09:19:20.094427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.255 qpair failed and we were unable to recover it. 
00:33:42.255 [2024-07-24 09:19:20.094584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.255 [2024-07-24 09:19:20.094609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.255 qpair failed and we were unable to recover it.
[... the identical three-message sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats continuously from 09:19:20.094753 through 09:19:20.127933 ...]
00:33:42.260 [2024-07-24 09:19:20.128092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.260 [2024-07-24 09:19:20.128141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.260 qpair failed and we were unable to recover it.
00:33:42.260 [2024-07-24 09:19:20.128332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.260 [2024-07-24 09:19:20.128370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.260 qpair failed and we were unable to recover it.
[... four more identical failures against tqpair=0x7f7428000b90 (09:19:20.128538 through 09:19:20.129038), after which the retries return to tqpair=0x12774b0 and repeat through 09:19:20.130852 ...]
00:33:42.261 [2024-07-24 09:19:20.131015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.131935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.131959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 
00:33:42.261 [2024-07-24 09:19:20.132548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.132878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.132995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.133927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.133954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 
00:33:42.261 [2024-07-24 09:19:20.134067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.134093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.134221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.134246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.134384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.134408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.134522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.134553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.134669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.261 [2024-07-24 09:19:20.134693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.261 qpair failed and we were unable to recover it. 00:33:42.261 [2024-07-24 09:19:20.134827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.134853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.134961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.134987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.135100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.135131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.135307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.135331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.135496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.135521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 
00:33:42.262 [2024-07-24 09:19:20.135641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.135667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.135815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.135840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.135984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.136195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.136380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.136568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.136741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.136907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.136932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.137057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.137251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 
00:33:42.262 [2024-07-24 09:19:20.137413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.137565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.137754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.137921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.137945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.138812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.138837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 
00:33:42.262 [2024-07-24 09:19:20.138980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.139946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.139970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.140095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.140235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.140423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 
00:33:42.262 [2024-07-24 09:19:20.140617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.140754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.262 qpair failed and we were unable to recover it. 00:33:42.262 [2024-07-24 09:19:20.140896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.262 [2024-07-24 09:19:20.140921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.141896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.141920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.142080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 
00:33:42.263 [2024-07-24 09:19:20.142253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.142388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.142548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.142715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.142884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.142909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.143049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.143200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.143353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.143522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.143688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 
00:33:42.263 [2024-07-24 09:19:20.143853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.143878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.144885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.144997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.145153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.145294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 
00:33:42.263 [2024-07-24 09:19:20.145434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.145634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.145800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.145828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.145987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 00:33:42.263 [2024-07-24 09:19:20.146966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.146991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.263 qpair failed and we were unable to recover it. 
00:33:42.263 [2024-07-24 09:19:20.147134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.263 [2024-07-24 09:19:20.147160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.147305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.147330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.147471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.147496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.147632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.147673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.147832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.147860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.148044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.148242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.148405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.148552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.148714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 
00:33:42.264 [2024-07-24 09:19:20.148858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.148883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.149833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.149858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.150078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.150128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.150310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.150349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.150512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.150540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 
00:33:42.264 [2024-07-24 09:19:20.150661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.150689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.150890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.150919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.151119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.151153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.151303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.151328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.151499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.151524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.151682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.151709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.151873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.151901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.152050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.152078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.152229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.152257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 00:33:42.264 [2024-07-24 09:19:20.152442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.152470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it. 
00:33:42.264 [2024-07-24 09:19:20.152635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.264 [2024-07-24 09:19:20.152669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.264 qpair failed and we were unable to recover it.
[... the three messages above repeat roughly 210 more times, from 00:33:42.264 / [2024-07-24 09:19:20.152836] through 00:33:42.271 / [2024-07-24 09:19:20.189856]; the tqpair handle cycles among 0x7f7428000b90, 0x7f7420000b90, 0x7f7418000b90, and 0x12774b0, and every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111 ...]
00:33:42.271 [2024-07-24 09:19:20.189991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.190849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.190976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.191004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.191154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.191179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.191344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.191369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.191557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.191584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 
00:33:42.271 [2024-07-24 09:19:20.191840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.191867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.192959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.192998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.193154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.193183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.193323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.193350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.193478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.193504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 
00:33:42.271 [2024-07-24 09:19:20.193642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.193667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.193813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.193838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.271 [2024-07-24 09:19:20.193993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.271 [2024-07-24 09:19:20.194032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.271 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.194190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.194229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.194395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.194424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.194589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.194615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.194808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.194836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.194987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.195171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.195323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 
00:33:42.272 [2024-07-24 09:19:20.195503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.195681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.195828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.195864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.196944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.196969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.197119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.197149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 
00:33:42.272 [2024-07-24 09:19:20.197291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.197316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.197432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.197458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.197624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.197652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.197802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.197830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.197983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.198155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.198352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.198563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.198757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.198919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.198947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 
00:33:42.272 [2024-07-24 09:19:20.199109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.199153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.199320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.199345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.199554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.199581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.199734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.199762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.199939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.199967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.200129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.200154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.200293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.200318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.200429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.200454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.200626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.200654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 00:33:42.272 [2024-07-24 09:19:20.200791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.272 [2024-07-24 09:19:20.200821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.272 qpair failed and we were unable to recover it. 
00:33:42.273 [2024-07-24 09:19:20.200997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.201177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.201349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.201569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.201747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.201943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.201968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.202079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.202109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.202281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.202306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.202423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.202449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.202587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.202613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 
00:33:42.273 [2024-07-24 09:19:20.202765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.202807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.202994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.203021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.203185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.203215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.203356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.203384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.203530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.203558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.203738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.203766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.203960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.204144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.204312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.204456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 
00:33:42.273 [2024-07-24 09:19:20.204648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.204826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.204869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.205926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.273 [2024-07-24 09:19:20.205951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.273 qpair failed and we were unable to recover it. 00:33:42.273 [2024-07-24 09:19:20.206064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.206235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 
00:33:42.274 [2024-07-24 09:19:20.206394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.206576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.206754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.206933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.206961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.207127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.207156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.207300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.207343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.207495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.207538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.207692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.207735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.207877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.207902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.208045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 
00:33:42.274 [2024-07-24 09:19:20.208246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.208424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.208598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.208759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.208957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.208986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.209155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.209196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.209327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.209356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.209526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.209553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.209701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.209728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.209917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.209944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 
00:33:42.274 [2024-07-24 09:19:20.210110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.210136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.210242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.210267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.210393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.210426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.210586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.210615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.210809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.210837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.210983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.211183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.211346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.211554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.211730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 
00:33:42.274 [2024-07-24 09:19:20.211909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.211938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.212119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.212145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.212312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.212337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.274 [2024-07-24 09:19:20.212472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.274 [2024-07-24 09:19:20.212497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.274 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.212653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.212679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.212825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.212867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.213028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.213056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.213196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.213225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.213364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.213389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 00:33:42.275 [2024-07-24 09:19:20.213577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.275 [2024-07-24 09:19:20.213620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.275 qpair failed and we were unable to recover it. 
00:33:42.275 [2024-07-24 09:19:20.213780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.275 [2024-07-24 09:19:20.213822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.275 qpair failed and we were unable to recover it.
00:33:42.275 [the same three-line sequence -- posix_sock_create: connect() failed, errno = 111, then nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f7420000b90 or 0x7f7428000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-24 09:19:20.213927 through 09:19:20.252237]
00:33:42.280 [2024-07-24 09:19:20.252365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.280 [2024-07-24 09:19:20.252399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.280 qpair failed and we were unable to recover it.
00:33:42.280 [2024-07-24 09:19:20.252577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.252605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.252781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.252809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.252937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.252962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.253125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.253152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.253283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.253308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.253448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.280 [2024-07-24 09:19:20.253477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.280 qpair failed and we were unable to recover it. 00:33:42.280 [2024-07-24 09:19:20.253628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.253656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.253804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.253833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.253973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.253998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.254114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.254142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-24 09:19:20.254306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.254332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.254471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.254499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.254673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.254701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.254886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.254914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.255086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.255120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.255306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.255331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.255460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.255488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.255605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.255633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.255864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.255893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.256036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.256063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-24 09:19:20.256226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.256251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.256397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.256425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.256571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.256598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.256746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.256774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.256978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.257178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.257369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.257548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.257726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.257911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.257936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-24 09:19:20.258096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.258154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.258322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.258365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.258523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.258567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.258731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.258775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.258912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.258938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.259074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.259099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.259261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.259303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.259454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.259496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.259684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.259712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.259864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.259895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-24 09:19:20.260034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.260199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.260404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.260602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.260757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.260939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.260963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.261100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.261133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.261239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.261264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.261404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.261428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 00:33:42.281 [2024-07-24 09:19:20.261557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.281 [2024-07-24 09:19:20.261600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.281 qpair failed and we were unable to recover it. 
00:33:42.281 [2024-07-24 09:19:20.261715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.261740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.261854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.261881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.262965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.262990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.263132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.263157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.263310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.263352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-24 09:19:20.263508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.263553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.263688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.263714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.263877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.263902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.264951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.264979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.265166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.265196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-24 09:19:20.265402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.265445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.265635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.265678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.265843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.265886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.266893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.266918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.267033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.267064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-24 09:19:20.267230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.267275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.267461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.267489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.267673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.267715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.267881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.267906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.268074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.268313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.268492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.268677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.268859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.268991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.269017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 
00:33:42.282 [2024-07-24 09:19:20.269127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.269154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.269311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.269339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.269484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.269513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.282 [2024-07-24 09:19:20.269668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.282 [2024-07-24 09:19:20.269696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.282 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.269878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.269906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.270062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.270269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.270419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.270597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.270744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-24 09:19:20.270900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.270929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.271108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.271136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.271336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.271365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.271493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.271522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.271643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.271671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.271830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.271864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.272054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.272079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.272214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.272241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.272382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.272425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.272611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.272654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-24 09:19:20.272818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.272860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.272998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.273025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.273175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.273204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.273375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.273418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.273603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.273646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.273822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.273847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.274018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.274206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.274358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.274515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-24 09:19:20.274727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.274926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.274950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.275089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.275121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.275289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.275316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.275448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.275476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.275623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.275651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.275866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.275910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.276028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.276054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.276192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.276218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 00:33:42.283 [2024-07-24 09:19:20.276381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.283 [2024-07-24 09:19:20.276424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.283 qpair failed and we were unable to recover it. 
00:33:42.283 [2024-07-24 09:19:20.276586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.283 [2024-07-24 09:19:20.276629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.283 qpair failed and we were unable to recover it.
00:33:42.283-00:33:42.289 [... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats continuously from 09:19:20.276586 through 09:19:20.314877, cycling over tqpair=0x7f7420000b90, 0x7f7428000b90, 0x7f7418000b90, and 0x12774b0, always with addr=10.0.0.2, port=4420 ...]
00:33:42.289 [2024-07-24 09:19:20.315036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-24 09:19:20.315065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.289 qpair failed and we were unable to recover it. 00:33:42.289 [2024-07-24 09:19:20.315229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.289 [2024-07-24 09:19:20.315255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.315400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.315425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.315561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.315587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.315700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.315726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.315896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.315921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.316045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.316274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.316433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.316620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 
00:33:42.290 [2024-07-24 09:19:20.316767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.316925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.316950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.317119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.317296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.317432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.317624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.317811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.317975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.318000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.318157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.318185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.290 qpair failed and we were unable to recover it. 00:33:42.290 [2024-07-24 09:19:20.318301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.290 [2024-07-24 09:19:20.318328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-24 09:19:20.318486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.318517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.318638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.318663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.318780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.318806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.318940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.318965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.319145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.319174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.319321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.319351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.319486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.319513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.319653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.319678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.319838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.319865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.320048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-24 09:19:20.320221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.320355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.320521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.320702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.320880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.320907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.321059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.321084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.321256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.321281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.321469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.321497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.321646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.321672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.321852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.321880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-24 09:19:20.322005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.322167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.322333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.322517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.322684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.322880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.322921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.323047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.323216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.323411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.323621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 
00:33:42.291 [2024-07-24 09:19:20.323815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.323973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.323998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.324145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.324203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.324345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.324371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.324490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.324515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.324660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.324685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.291 [2024-07-24 09:19:20.324884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.291 [2024-07-24 09:19:20.324909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.291 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.325041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.325259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.325424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-24 09:19:20.325560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.325744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.325902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.325928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.326890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.326914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.327077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 
00:33:42.292 [2024-07-24 09:19:20.327265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.327431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.327586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.327755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.292 [2024-07-24 09:19:20.327929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.292 [2024-07-24 09:19:20.327958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.292 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.328094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.328262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.328426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.328598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.328762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-24 09:19:20.328931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.328956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.329963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.329989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 
00:33:42.575 [2024-07-24 09:19:20.330414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.330871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.330978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.331003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.331143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.331169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.331273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.331297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.331409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.331434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.331572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.575 [2024-07-24 09:19:20.331597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.575 qpair failed and we were unable to recover it. 00:33:42.575 [2024-07-24 09:19:20.331737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.331761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-24 09:19:20.331898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.331923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.332958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.332982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.333088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.333234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-24 09:19:20.333393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.333528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.333714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.333900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.333925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.334759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-24 09:19:20.334958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.334985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.335933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.335960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.336077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.336111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.336251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.336281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 00:33:42.576 [2024-07-24 09:19:20.336423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.576 [2024-07-24 09:19:20.336464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.576 qpair failed and we were unable to recover it. 
00:33:42.576 [2024-07-24 09:19:20.336615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.576 [2024-07-24 09:19:20.336643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.576 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure triplets repeated for the remainder of this span (source timestamps 09:19:20.336803 through 09:19:20.373717, elapsed markers 00:33:42.576 through 00:33:42.582), every entry reporting posix.c:1023:posix_sock_create connect() failed with errno = 111 followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock against addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it."; the failures alternate between tqpair=0x12774b0 and tqpair=0x7f7428000b90; verbatim duplicates omitted ...]
00:33:42.582 [2024-07-24 09:19:20.373743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.582 [2024-07-24 09:19:20.373770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.582 qpair failed and we were unable to recover it.
00:33:42.582 [2024-07-24 09:19:20.373920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.373948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.374847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.374872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.375031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.375183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.375336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 
00:33:42.582 [2024-07-24 09:19:20.375536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.375669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.375856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.375881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.376166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.376359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.376501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.376711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.582 [2024-07-24 09:19:20.376849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.582 qpair failed and we were unable to recover it. 00:33:42.582 [2024-07-24 09:19:20.376988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.377193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 
00:33:42.583 [2024-07-24 09:19:20.377381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.377570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.377785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.377922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.377948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.378133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.378296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.378451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.378634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.378818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.378978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 
00:33:42.583 [2024-07-24 09:19:20.379151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.379362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.379576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.379763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.379903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.379929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.380066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.380091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.380292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.380331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.380507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.380532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.380695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.380723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.380873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.380901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 
00:33:42.583 [2024-07-24 09:19:20.381078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.381297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.381458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.381647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.381786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.381959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.381987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.382099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.382243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.382410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.382615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 
00:33:42.583 [2024-07-24 09:19:20.382745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.382943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.382974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.383155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.383181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.383334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.383362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.383523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.383550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.383688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.583 [2024-07-24 09:19:20.383712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.583 qpair failed and we were unable to recover it. 00:33:42.583 [2024-07-24 09:19:20.383850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.383891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.384009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.384177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.384387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 
00:33:42.584 [2024-07-24 09:19:20.384566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.384745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.384886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.384911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.385877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.385919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.386084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.386117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 
00:33:42.584 [2024-07-24 09:19:20.386276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.386301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.386420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.386447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.386621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.386661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.386852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.386877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.387005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.387031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.387213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.387253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.387406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3925018 Killed "${NVMF_APP[@]}" "$@"
00:33:42.584 [2024-07-24 09:19:20.387434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.387612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.387640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
00:33:42.584 [2024-07-24 09:19:20.387786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.584 [2024-07-24 09:19:20.387815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.584 qpair failed and we were unable to recover it.
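(note, not part of the captured output) the "Killed" notice disentangled above is bash reporting that the nvmf_tgt job launched from line 36 of target_disconnect.sh (pid 3925018) was killed; that is what leaves 10.0.0.2:4420 with no listener. A sketch of the kill step, an assumption from the job notice alone rather than the script source:

    # forcibly stop the old target so the host's reconnect attempts are refused
    kill -9 "$nvmfpid" 2>/dev/null   # pid 3925018 in this run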
00:33:42.584 [2024-07-24 09:19:20.387952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.387979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.388153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:33:42.584 [2024-07-24 09:19:20.388182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.388289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.388314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:33:42.584 [2024-07-24 09:19:20.388454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.388481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.388635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.388665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:42.584 [2024-07-24 09:19:20.388813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.388843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.388985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.389011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.584 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:42.584 qpair failed and we were unable to recover it. 00:33:42.584 [2024-07-24 09:19:20.389158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.584 [2024-07-24 09:19:20.389194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.585 qpair failed and we were unable to recover it. 
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x [2024-07-24 09:19:20.389343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.389381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.389538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.389565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.389704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.389729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.389845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.389870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.390908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.390944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.391194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.391224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.391387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.391416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.391546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.391574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.391733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.391759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.391923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.391948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.392131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.392174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.392296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.392322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.392442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.392469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.392640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.392683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.392839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.392865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.393052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.393080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.393227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.393252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.393365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.393408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.393584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.393610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.393745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.393769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3925570 [2024-07-24 09:19:20.393945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 [2024-07-24 09:19:20.393971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3925570 [2024-07-24 09:19:20.394135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.394190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
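(note, not part of the captured output) the traces above show nvmf/common.sh relaunching the target: it records the new pid (nvmfpid=3925570), starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and then waits for it. A sketch assembled only from the commands visible in the trace, not from the nvmf/common.sh source:

    # relaunch the target in its network namespace, as traced at nvmf/common.sh@480
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!                # 3925570 in this run
    waitforlisten "$nvmfpid"  # block until the RPC socket is up (see below)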
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3925570 ']' [2024-07-24 09:19:20.394358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.394384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock [2024-07-24 09:19:20.394522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.394548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 [2024-07-24 09:19:20.394687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.394730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... [2024-07-24 09:19:20.394882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.394910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable [2024-07-24 09:19:20.395050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.395082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.585 qpair failed and we were unable to recover it.
00:33:42.585 [2024-07-24 09:19:20.395258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.585 [2024-07-24 09:19:20.395284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.586 qpair failed and we were unable to recover it.
00:33:42.586 [2024-07-24 09:19:20.395461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.586 [2024-07-24 09:19:20.395488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.586 qpair failed and we were unable to recover it.
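(note, not part of the captured output) waitforlisten's traced locals above (rpc_addr=/var/tmp/spdk.sock, max_retries=100) match the "Waiting for process to start up..." message: it polls for the target's RPC UNIX domain socket. A simplified stand-in with the same shape; the real helper is in common/autotest_common.sh and is not reproduced in this log:

    # poll until the new nvmf_tgt exposes its RPC UNIX domain socket
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_addr ]] && break   # socket present -> target is listening
        sleep 0.1
    done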
00:33:42.586 [2024-07-24 09:19:20.395672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.395697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.395854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.395882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.396873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.396898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.397034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.397214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 
00:33:42.586 [2024-07-24 09:19:20.397374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.397542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.397733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.397899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.397925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.398089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.398288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.398449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.398623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.398812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.398977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 
00:33:42.586 [2024-07-24 09:19:20.399120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.399331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.399537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.399729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.399900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.399926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.400112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.400149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.400305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.400330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.400461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.400487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.400622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.400670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.400819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.400847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 
00:33:42.586 [2024-07-24 09:19:20.401012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.401149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.401291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.401454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.401603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.401824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.401852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.586 [2024-07-24 09:19:20.402029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.586 [2024-07-24 09:19:20.402058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.586 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.402206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.402238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.402385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.402410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.402546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.402572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 
00:33:42.587 [2024-07-24 09:19:20.402729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.402757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.402934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.402962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.403119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.403146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.403290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.403331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.403460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.403489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.403675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.403701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.403882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.403910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.404084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.404124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.404291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.404317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.404479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.404504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 
00:33:42.587 [2024-07-24 09:19:20.404718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.404779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.404936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.404962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.405188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.405331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.405521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.405675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.405826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.405981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.406152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.406292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 
00:33:42.587 [2024-07-24 09:19:20.406482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.406631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.406786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.406814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.406985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.407013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.407228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.407267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.407418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.407446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.407626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.407652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.407758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.407782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.408052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.408264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 
00:33:42.587 [2024-07-24 09:19:20.408450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.408586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.408732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.408875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.587 [2024-07-24 09:19:20.408902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.587 qpair failed and we were unable to recover it. 00:33:42.587 [2024-07-24 09:19:20.409066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.409249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.409387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.409559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.409710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.409850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.409877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 
00:33:42.588 [2024-07-24 09:19:20.410070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.410236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.410405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.410589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.410722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.410909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.410934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.411048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.411089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.411279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.411305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.411456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.411485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.411731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.411783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 
00:33:42.588 [2024-07-24 09:19:20.411919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.411944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.412064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.412092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.412226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.412252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.412392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.412417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.412529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.412570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.412830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.412882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.413052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.413221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.413370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.413535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 
00:33:42.588 [2024-07-24 09:19:20.413702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.413858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.413885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.588 [2024-07-24 09:19:20.414889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.588 [2024-07-24 09:19:20.414917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.588 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.415048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.415217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 
00:33:42.589 [2024-07-24 09:19:20.415421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.415604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.415769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.415922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.415947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.416178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.416344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.416479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.416675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.416845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.416987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 
00:33:42.589 [2024-07-24 09:19:20.417165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.417321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.417523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.417708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.417889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.417918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.418093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.418126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.418286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.418311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.418441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.418480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.418678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.418708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.418862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.418888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 
00:33:42.589 [2024-07-24 09:19:20.419028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.419196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.419365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.419530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.419701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.419861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.419886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.420046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.420077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.420240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.420265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.420401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.420426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.589 qpair failed and we were unable to recover it. 00:33:42.589 [2024-07-24 09:19:20.420533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.589 [2024-07-24 09:19:20.420558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 
00:33:42.590 [2024-07-24 09:19:20.420696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.420720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.420882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.420907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.421917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.421942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.422113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.422141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.422296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.422322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 
00:33:42.590 [2024-07-24 09:19:20.422463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.422488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.422626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.422652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.422781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.422809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.422991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.423162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.423330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.423562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.423740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.423932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.423963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.424107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.424132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 
00:33:42.590 [2024-07-24 09:19:20.424248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.424273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.424460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.424488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.424649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.424673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.424843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.424869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.425822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.425847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 
00:33:42.590 [2024-07-24 09:19:20.425998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.426026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.426197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.426223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.590 [2024-07-24 09:19:20.426337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.590 [2024-07-24 09:19:20.426379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.590 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.426554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.426582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.426711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.426736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.426895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.426920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.427049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.427245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.427414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.427580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 
00:33:42.591 [2024-07-24 09:19:20.427809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.427942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.427984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.428181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.428321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.428451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.428663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.428814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.428987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.429030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.429200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.429226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 00:33:42.591 [2024-07-24 09:19:20.429335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.591 [2024-07-24 09:19:20.429359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.591 qpair failed and we were unable to recover it. 
00:33:42.591 [2024-07-24 09:19:20.431162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.591 [2024-07-24 09:19:20.431201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.591 qpair failed and we were unable to recover it.
00:33:42.593 [2024-07-24 09:19:20.441773] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization...
00:33:42.593 [2024-07-24 09:19:20.441851] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:42.594 [2024-07-24 09:19:20.449966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.594 [2024-07-24 09:19:20.450007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.594 qpair failed and we were unable to recover it.
00:33:42.596 [2024-07-24 09:19:20.463366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.596 [2024-07-24 09:19:20.463391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.596 qpair failed and we were unable to recover it. 00:33:42.596 [2024-07-24 09:19:20.463530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.596 [2024-07-24 09:19:20.463556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.596 qpair failed and we were unable to recover it. 00:33:42.596 [2024-07-24 09:19:20.463695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.596 [2024-07-24 09:19:20.463720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.596 qpair failed and we were unable to recover it. 00:33:42.596 [2024-07-24 09:19:20.463859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.596 [2024-07-24 09:19:20.463887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.596 qpair failed and we were unable to recover it. 00:33:42.596 [2024-07-24 09:19:20.464065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.464092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.464278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.464317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.464473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.464512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.464695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.464728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.464881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.464924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.465063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 
00:33:42.597 [2024-07-24 09:19:20.465209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.465356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.465559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.465791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.465958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.465985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.466105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.466132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.466275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.466302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.466435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.466464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.466617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.466645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.466820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.466848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 
00:33:42.597 [2024-07-24 09:19:20.467009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.467173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.467348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.467530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.467717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.467865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.467892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.468055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.468083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.468249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.468276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.468412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.468457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.468588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.468631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 
00:33:42.597 [2024-07-24 09:19:20.468787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.468830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.468995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.469168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.469333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.469541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.469798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.469965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.469992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.470151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.470180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.470354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.470382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.470569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.470612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 
00:33:42.597 [2024-07-24 09:19:20.470757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.470784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.470925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.597 [2024-07-24 09:19:20.470951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.597 qpair failed and we were unable to recover it. 00:33:42.597 [2024-07-24 09:19:20.471136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.471181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.471317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.471361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.471495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.471538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.471702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.471727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.471833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.471859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.471973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.472188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.472386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 
00:33:42.598 [2024-07-24 09:19:20.472559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.472749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.472909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.472935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.473884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.473911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.474022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 
00:33:42.598 [2024-07-24 09:19:20.474188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.474386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.474515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.474672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.474863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.474889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.475011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.475154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.475342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.475552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.475691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 
00:33:42.598 [2024-07-24 09:19:20.475880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.475906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.476025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.476189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.476363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.476532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.476716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 EAL: No free 2048 kB hugepages reported on node 1 00:33:42.598 [2024-07-24 09:19:20.476884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.598 [2024-07-24 09:19:20.476912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.598 qpair failed and we were unable to recover it. 00:33:42.598 [2024-07-24 09:19:20.477027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.599 [2024-07-24 09:19:20.477054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.599 qpair failed and we were unable to recover it. 00:33:42.599 [2024-07-24 09:19:20.477223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.599 [2024-07-24 09:19:20.477267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.599 qpair failed and we were unable to recover it. 00:33:42.599 [2024-07-24 09:19:20.477398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.599 [2024-07-24 09:19:20.477441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.599 qpair failed and we were unable to recover it. 
00:33:42.599 [... same sequence repeats from 09:19:20.477652 through 09:19:20.481136 for tqpair 0x7f7420000b90 ...]
00:33:42.599 [2024-07-24 09:19:20.481130] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:33:42.599 [... same sequence repeats from 09:19:20.481162 through 09:19:20.491866 for tqpairs 0x7f7420000b90, 0x7f7428000b90, and 0x12774b0 ...]
00:33:42.601 [2024-07-24 09:19:20.492018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.492846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.492986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.493124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.493261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.493418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 
00:33:42.601 [2024-07-24 09:19:20.493578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.493744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.493880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.493906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.494048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.494072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.494236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.601 [2024-07-24 09:19:20.494264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.601 qpair failed and we were unable to recover it. 00:33:42.601 [2024-07-24 09:19:20.494374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.494399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.494538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.494563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.494725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.494751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.494892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.494918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.495081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 
00:33:42.602 [2024-07-24 09:19:20.495267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.495409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.495567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.495702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.495862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.495886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.496002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.496186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.496372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.496535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.496697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 
00:33:42.602 [2024-07-24 09:19:20.496868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.496893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.497869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.497894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 
00:33:42.602 [2024-07-24 09:19:20.498484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.498897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.498923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.499835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.499860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 
00:33:42.602 [2024-07-24 09:19:20.499991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.500016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.500151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.500177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.500317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.602 [2024-07-24 09:19:20.500343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.602 qpair failed and we were unable to recover it. 00:33:42.602 [2024-07-24 09:19:20.500459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.500484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.500602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.500627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.500766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.500792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.500943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.500967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.501112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.501138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.501279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.501304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.501441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.501466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 
00:33:42.603 [2024-07-24 09:19:20.501639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.501664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.501782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.501809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.501963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.502192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.502351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.502513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.502681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.502822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.502847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 
00:33:42.603 [2024-07-24 09:19:20.503324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.503930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.503955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.504072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.504270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.504424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.504564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.504752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 
00:33:42.603 [2024-07-24 09:19:20.504881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.504906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.505043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.505068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.505187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.505215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.505317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.505342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.505561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.603 [2024-07-24 09:19:20.505585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.603 qpair failed and we were unable to recover it. 00:33:42.603 [2024-07-24 09:19:20.505702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.505727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.505834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.505859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.505991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.506201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.506391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 
00:33:42.604 [2024-07-24 09:19:20.506533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.506680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.506838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.506870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.507952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.507977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 
00:33:42.604 [2024-07-24 09:19:20.508094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.508242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.508407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.508569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.508728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.508863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.508889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.509046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.509230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.509447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.509632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 
00:33:42.604 [2024-07-24 09:19:20.509778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.509950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.509976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:42.604 [2024-07-24 09:19:20.510839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.510863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.510999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.511025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 
00:33:42.604 [2024-07-24 09:19:20.511178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.511204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.511325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.511350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.511500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.511527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.604 [2024-07-24 09:19:20.511664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.604 [2024-07-24 09:19:20.511689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.604 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.511798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.511824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.511941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.511966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.512099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.512296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.512432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.512593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 
00:33:42.605 [2024-07-24 09:19:20.512791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.512932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.512957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.513900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.513926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.514064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.514089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 00:33:42.605 [2024-07-24 09:19:20.514211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.605 [2024-07-24 09:19:20.514237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.605 qpair failed and we were unable to recover it. 
00:33:42.610 [2024-07-24 09:19:20.545590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.545615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.545748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.545774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.545894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.545919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.546842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.546979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 
00:33:42.610 [2024-07-24 09:19:20.547166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.547307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.547443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.547605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.610 [2024-07-24 09:19:20.547772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.610 [2024-07-24 09:19:20.547798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.610 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.547933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.547957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.548068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.548232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.548379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.548575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 
00:33:42.611 [2024-07-24 09:19:20.548742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.548906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.548931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.549863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.549889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.550055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.550195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 
00:33:42.611 [2024-07-24 09:19:20.550386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.550553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.550729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.550893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.550919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.551858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.551882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 
00:33:42.611 [2024-07-24 09:19:20.551990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.552874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.552899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.553042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.553187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.553352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 
00:33:42.611 [2024-07-24 09:19:20.553525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.553667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.611 [2024-07-24 09:19:20.553806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.611 [2024-07-24 09:19:20.553832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.611 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.553945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.553971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.554883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.554908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 
00:33:42.612 [2024-07-24 09:19:20.555047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.555267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.555430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.555570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.555712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.555871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.555896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 
00:33:42.612 [2024-07-24 09:19:20.556635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.556953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.556993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.557929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.557954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.558114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 
00:33:42.612 [2024-07-24 09:19:20.558274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.558434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.558572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.558741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.558930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.558955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.559117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.559154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.612 [2024-07-24 09:19:20.559319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.612 [2024-07-24 09:19:20.559350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.612 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.559510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.559538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.559650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.559676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.559793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.559819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 
00:33:42.613 [2024-07-24 09:19:20.559928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.559953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.560900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.560926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.561115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.561157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.561282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.561310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.561508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.561533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 
00:33:42.613 [2024-07-24 09:19:20.561676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.561701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.561868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.561893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.562859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.562995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.563176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 
00:33:42.613 [2024-07-24 09:19:20.563342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.563520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.563691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.563855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.563882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.564838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.564864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 
00:33:42.613 [2024-07-24 09:19:20.564976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.565001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.565167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.565196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.613 [2024-07-24 09:19:20.565339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.613 [2024-07-24 09:19:20.565364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.613 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.565500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.565526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.565663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.565688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.565817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.565858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.565989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.566153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.566353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.566522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 
00:33:42.614 [2024-07-24 09:19:20.566663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.566809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.566835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.566979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.567961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.567986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 00:33:42.614 [2024-07-24 09:19:20.568112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.614 [2024-07-24 09:19:20.568137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.614 qpair failed and we were unable to recover it. 
00:33:42.614 [2024-07-24 09:19:20.568301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.568326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.568464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.568489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.568638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.568664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.568777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.568803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.568941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.568965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.569890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.569915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.570887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.570913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.571021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.571046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.571193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.571219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.571331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.614 [2024-07-24 09:19:20.571355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.614 qpair failed and we were unable to recover it.
00:33:42.614 [2024-07-24 09:19:20.571523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.571549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.571684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.571710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.571862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.571887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.572924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.572951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.573922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.573946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.574840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.574865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.575958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.575983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.576971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.576998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.577130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.577169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.577300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.615 [2024-07-24 09:19:20.577329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.615 qpair failed and we were unable to recover it.
00:33:42.615 [2024-07-24 09:19:20.577448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.577475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.577642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.577667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.577801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.577826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.577940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.577966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.578895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.578919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.579898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.579923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.580869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.580905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.581922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.581950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.582946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.582974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.583082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.583120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.583270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.583296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.583429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.583455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.583595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.616 [2024-07-24 09:19:20.583620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.616 qpair failed and we were unable to recover it.
00:33:42.616 [2024-07-24 09:19:20.583755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.583780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.583890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.583918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.584933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.584960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.585973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.585998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.586167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.586307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.586498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.586657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.586845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.586986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.587877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.587992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.617 [2024-07-24 09:19:20.588808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.617 qpair failed and we were unable to recover it.
00:33:42.617 [2024-07-24 09:19:20.588927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.588953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.589937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.589963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.590916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.590943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.591885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.591910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.592940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.592965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.593877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.593901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.618 [2024-07-24 09:19:20.594814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.618 qpair failed and we were unable to recover it.
00:33:42.618 [2024-07-24 09:19:20.594961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.594987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.595872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.595898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.597300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:42.619 [2024-07-24 09:19:20.597334] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:42.619 [2024-07-24 09:19:20.597349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:42.619 [2024-07-24 09:19:20.597361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:42.619 [2024-07-24 09:19:20.597372] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:42.619 [2024-07-24 09:19:20.597477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:33:42.619 [2024-07-24 09:19:20.597565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:33:42.619 [2024-07-24 09:19:20.597668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:33:42.619 [2024-07-24 09:19:20.597675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:33:42.619 [2024-07-24 09:19:20.598198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.598238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.598399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.598427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.598574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.598601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.598721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.598746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.598869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.598894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.599835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.599866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.600879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.600905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.601922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.601950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.602078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.602118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.602269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.602304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.602457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.602483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.602609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.619 [2024-07-24 09:19:20.602635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.619 qpair failed and we were unable to recover it.
00:33:42.619 [2024-07-24 09:19:20.602780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.602805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.602922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.602947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.603111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.603137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.603249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.603274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.603408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.603433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.603539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.620 [2024-07-24 09:19:20.603564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420
00:33:42.620 qpair failed and we were unable to recover it.
00:33:42.620 [2024-07-24 09:19:20.603677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.603703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.603836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.603861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.604908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.604936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 
00:33:42.620 [2024-07-24 09:19:20.605221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.605928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.605954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.606077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.606251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.606404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.606552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 
00:33:42.620 [2024-07-24 09:19:20.606721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.606871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.606902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.607960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.607986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.608119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.608158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 
00:33:42.620 [2024-07-24 09:19:20.608296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.608333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.608563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.620 [2024-07-24 09:19:20.608602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.620 qpair failed and we were unable to recover it. 00:33:42.620 [2024-07-24 09:19:20.608715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.608743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.608858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.608883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.609804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-24 09:19:20.609961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.609987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.610952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.610979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-24 09:19:20.611374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.611972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.611999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.612140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.612277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.612452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.612614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.612745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 
00:33:42.621 [2024-07-24 09:19:20.612898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.612924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.613967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.613993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.614113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.621 [2024-07-24 09:19:20.614139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.621 qpair failed and we were unable to recover it. 00:33:42.621 [2024-07-24 09:19:20.614245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.614270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-24 09:19:20.614370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.614411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.614562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.614590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.614702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.614728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.614834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.614859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.614998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.615188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.615342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.615514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.615651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.615840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.615865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-24 09:19:20.615979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.616876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.616903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 
00:33:42.622 [2024-07-24 09:19:20.617502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.617949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.617974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.618082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.618115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.618240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.618279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.618402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.618431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.622 [2024-07-24 09:19:20.618566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.622 [2024-07-24 09:19:20.618603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.622 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.618736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.618762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.618876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.618904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-24 09:19:20.619042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.619943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.619971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-24 09:19:20.620571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.620876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.620987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.621861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.621886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-24 09:19:20.622000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.622902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.622928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.623099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.623140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.623254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.623279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 00:33:42.623 [2024-07-24 09:19:20.623393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.623 [2024-07-24 09:19:20.623420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.623 qpair failed and we were unable to recover it. 
00:33:42.623 [2024-07-24 09:19:20.623568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.623 [2024-07-24 09:19:20.623594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420
00:33:42.623 qpair failed and we were unable to recover it.
00:33:42.623 [2024-07-24 09:19:20.623725 through 09:19:20.656016] (the same three-line failure sequence repeats continuously: posix.c:1023:posix_sock_create reports connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error; and the qpair is declared failed and unrecoverable — alternating across tqpair=0x7f7420000b90, tqpair=0x7f7428000b90, and tqpair=0x12774b0, always with addr=10.0.0.2, port=4420)
00:33:42.629 [2024-07-24 09:19:20.656135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-24 09:19:20.656179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-24 09:19:20.656303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-24 09:19:20.656330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-24 09:19:20.656443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-24 09:19:20.656468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-24 09:19:20.656615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-24 09:19:20.656640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.629 [2024-07-24 09:19:20.656784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.629 [2024-07-24 09:19:20.656811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.629 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.656921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.656947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-24 09:19:20.657669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.657936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.657965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.658851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.658890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-24 09:19:20.659160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.659904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.659929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.660115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.660278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.660416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.660552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-24 09:19:20.660695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.660838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.660864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.661961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.661987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.662110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.662152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 
00:33:42.630 [2024-07-24 09:19:20.662278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.662306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.662439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.662464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.630 [2024-07-24 09:19:20.662601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.630 [2024-07-24 09:19:20.662626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.630 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.662742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.662766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.662886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.662912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.663036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.663221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.663388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.663532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.663723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-24 09:19:20.663891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.663916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.664960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.664985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.665112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.665257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.631 [2024-07-24 09:19:20.665398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.665561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.665723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.665855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.665880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 00:33:42.631 [2024-07-24 09:19:20.666786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.631 [2024-07-24 09:19:20.666812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.631 qpair failed and we were unable to recover it. 
00:33:42.903 [2024-07-24 09:19:20.666920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.666945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.667909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.667934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.668066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.668090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.668213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.668239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 
00:33:42.903 [2024-07-24 09:19:20.668357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.668382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.668505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.668531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.668662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.903 [2024-07-24 09:19:20.668687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.903 qpair failed and we were unable to recover it. 00:33:42.903 [2024-07-24 09:19:20.668791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.668825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.668951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.668978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 
00:33:42.904 [2024-07-24 09:19:20.669791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.669931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.669956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.670888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.670914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 
00:33:42.904 [2024-07-24 09:19:20.671353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.671922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.671947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 
00:33:42.904 [2024-07-24 09:19:20.672830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.672969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.672994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.673881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.673998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.674023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 00:33:42.904 [2024-07-24 09:19:20.674145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.674171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.904 qpair failed and we were unable to recover it. 
00:33:42.904 [2024-07-24 09:19:20.674334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.904 [2024-07-24 09:19:20.674363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.674504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.674529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.674639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.674664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.674812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.674842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.674959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.674983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.675125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.675270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.675416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.675557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.675774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 
00:33:42.905 [2024-07-24 09:19:20.675916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.675941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.676912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.676954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.677110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.677273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.677421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 
00:33:42.905 [2024-07-24 09:19:20.677556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.677728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.677879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.677906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.678958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.678983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 
00:33:42.905 [2024-07-24 09:19:20.679157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.679344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.679484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.679626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.679768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.679907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.679931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.680042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.680068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.905 qpair failed and we were unable to recover it. 00:33:42.905 [2024-07-24 09:19:20.680215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.905 [2024-07-24 09:19:20.680239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.680386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.680411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.680525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.680551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 
00:33:42.906 [2024-07-24 09:19:20.680662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.680687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.680831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.680856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.680967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.680991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.681965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.681990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 
00:33:42.906 [2024-07-24 09:19:20.682126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.682286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.682477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.682612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.682777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.682910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.682935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 
00:33:42.906 [2024-07-24 09:19:20.683592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.683883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.683908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.684933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.684958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 
00:33:42.906 [2024-07-24 09:19:20.685075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.685107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.685257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.685282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.685391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.685416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.685525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.906 [2024-07-24 09:19:20.685551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.906 qpair failed and we were unable to recover it. 00:33:42.906 [2024-07-24 09:19:20.685661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.685687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.685799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.685825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.685938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.685964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.686112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.686245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.686406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 
00:33:42.907 [2024-07-24 09:19:20.686572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.686715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.686869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.686909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.687974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.687999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 
00:33:42.907 [2024-07-24 09:19:20.688122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.688272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.688432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.688559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.688722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.688858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.688882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 
00:33:42.907 [2024-07-24 09:19:20.689614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.689913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.689938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.690863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.690891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.691008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.691035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 
00:33:42.907 [2024-07-24 09:19:20.691142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.907 [2024-07-24 09:19:20.691168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.907 qpair failed and we were unable to recover it. 00:33:42.907 [2024-07-24 09:19:20.691321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.691346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.691461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.691486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.691620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.691644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.691763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.691790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.691898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.691923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 
00:33:42.908 [2024-07-24 09:19:20.692661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.692962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.692991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.693970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.693995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 
00:33:42.908 [2024-07-24 09:19:20.694114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.694258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.694401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.694539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.694683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.694856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.694885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 
00:33:42.908 [2024-07-24 09:19:20.695679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.908 [2024-07-24 09:19:20.695839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.908 qpair failed and we were unable to recover it. 00:33:42.908 [2024-07-24 09:19:20.695945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.695969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.696858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.696897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 
00:33:42.909 [2024-07-24 09:19:20.697213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.697945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.697970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 
00:33:42.909 [2024-07-24 09:19:20.698692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.698971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.698996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.699886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.699925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 
00:33:42.909 [2024-07-24 09:19:20.700187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.700883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.700908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.701011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.701036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.701157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.701182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.701286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.701310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.909 qpair failed and we were unable to recover it. 00:33:42.909 [2024-07-24 09:19:20.701477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.909 [2024-07-24 09:19:20.701502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.910 qpair failed and we were unable to recover it. 
00:33:42.910 [2024-07-24 09:19:20.701636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:42.910 [2024-07-24 09:19:20.701662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420
00:33:42.910 qpair failed and we were unable to recover it.
00:33:42.910 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 09:19:20.701820 through 09:19:20.726539, cycling across tqpair handles 0x12774b0, 0x7f7418000b90, 0x7f7420000b90, and 0x7f7428000b90, always against addr=10.0.0.2, port=4420 ...]
00:33:42.914 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
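[Editor's note] errno = 111 in the posix_sock_create failures above is ECONNREFUSED on Linux: nothing is listening at 10.0.0.2:4420 because the test has taken the NVMe-oF target down, so every TCP connect() is refused outright. A minimal standalone C sketch reproducing the same errno is below; the 127.0.0.1 address is an illustrative stand-in for the test's 10.0.0.2 and is not part of the harness.

/* Demonstrates the errno = 111 (ECONNREFUSED) seen in the log:
 * connect() to a port with no listener fails immediately. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP default port */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the port, this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}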
00:33:42.914 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:33:42.914 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:42.914 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:42.914 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:33:42.915 [... the connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet continues interleaved with the trace output, still errno = 111 against 10.0.0.2:4420, through 09:19:20.732397 ...]
00:33:42.915 [2024-07-24 09:19:20.732540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.732565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.732678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.732703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.732850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.732875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.732982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.915 [2024-07-24 09:19:20.733792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.915 qpair failed and we were unable to recover it. 00:33:42.915 [2024-07-24 09:19:20.733906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.733938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 
00:33:42.916 [2024-07-24 09:19:20.734050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.734956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.734990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.735113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.735257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.735419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 
00:33:42.916 [2024-07-24 09:19:20.735589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.735752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.735892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.735918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.736882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.736990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 
00:33:42.916 [2024-07-24 09:19:20.737122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.737874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.737981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.738121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.738271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.738420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 
00:33:42.916 [2024-07-24 09:19:20.738565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.738704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.738841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.738868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.739015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.739054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.739190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.739219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.739342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.739367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.916 [2024-07-24 09:19:20.739485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.916 [2024-07-24 09:19:20.739511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.916 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.739632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.739658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.739765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.739789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.739907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.739943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 
00:33:42.917 [2024-07-24 09:19:20.740090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.740273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.740419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.740580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.740732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.740880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.740907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 
00:33:42.917 [2024-07-24 09:19:20.741639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.741950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.741976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.742943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.742972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 
00:33:42.917 [2024-07-24 09:19:20.743077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.743237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:42.917 [2024-07-24 09:19:20.743394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:42.917 [2024-07-24 09:19:20.743550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.917 [2024-07-24 09:19:20.743698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.917 [2024-07-24 09:19:20.743724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.743836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.743970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.917 [2024-07-24 09:19:20.743995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.917 qpair failed and we were unable to recover it. 00:33:42.917 [2024-07-24 09:19:20.744100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 
00:33:42.918 [2024-07-24 09:19:20.744272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.744414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.744557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.744704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.744855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.744882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 
00:33:42.918 [2024-07-24 09:19:20.745733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.745880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.745905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.746927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.746952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.747059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 
00:33:42.918 [2024-07-24 09:19:20.747211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.747348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.747506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.747668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.747818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.747846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.748007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.748213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.748402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.748554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.748695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 
00:33:42.918 [2024-07-24 09:19:20.748868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.748894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.918 [2024-07-24 09:19:20.749854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.918 [2024-07-24 09:19:20.749879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.918 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.749994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.750148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.750312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 
00:33:42.919 [2024-07-24 09:19:20.750458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.750624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.750775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.750916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.750941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 00:33:42.919 [2024-07-24 09:19:20.751824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.751849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it. 
00:33:42.919 [2024-07-24 09:19:20.751992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.919 [2024-07-24 09:19:20.752018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.919 qpair failed and we were unable to recover it.
[log condensed: the same three-part pattern — connect() failed, errno = 111 / sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats continuously from 09:19:20.752 through 09:19:20.765, cycling across tqpair handles 0x12774b0, 0x7f7428000b90, and 0x7f7420000b90]
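For anyone triaging this run: errno = 111 is ECONNREFUSED on Linux, meaning the TCP connection to 10.0.0.2:4420 is being actively refused because no listener is accepting there yet, which is exactly the condition the target-disconnect test provokes. The following is a minimal standalone C sketch, not SPDK source; the address and port are taken from the log, and on a host where 10.0.0.2 is reachable but nothing listens on 4420 it reproduces the same errno from a plain connect() call.

/*
 * Minimal standalone sketch, not SPDK code: shows how connect() to an
 * address/port with no listener yields errno = 111 (ECONNREFUSED),
 * the error posix_sock_create reports in the log above.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);            /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}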
[log condensed: failure triplets continue against tqpair=0x12774b0, 0x7f7428000b90, and 0x7f7420000b90]
00:33:42.921 Malloc0
[log condensed: failure triplets continue against tqpair=0x7f7420000b90, interleaved with the harness trace below]
00:33:42.921 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:42.921 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:33:42.922 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:42.922 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log condensed: failure triplets continue against tqpair=0x7f7420000b90, then 0x12774b0]
[log condensed: failure triplets continue against tqpair=0x12774b0, 0x7f7428000b90, and 0x7f7420000b90]
00:33:42.922 [2024-07-24 09:19:20.769980] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[log condensed: failure triplets continue against tqpair=0x7f7420000b90]
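The notice above shows the target's TCP transport being initialized while the host side keeps retrying; the repeating triplets are those connect attempts failing until a listener reappears. Below is a sketch of that retry-with-backoff pattern, assuming the log's address/port and plain BSD sockets; it is illustrative only, not SPDK's actual qpair reconnect code.

/*
 * Illustrative sketch, not SPDK's reconnect logic: retry connect()
 * with a short backoff until the listener comes up or attempts are
 * exhausted — the pattern behind the repeating "qpair failed" lines.
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

static int try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                      /* connected */

    int saved = errno;                  /* preserve connect()'s errno */
    close(fd);
    errno = saved;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            printf("connected on attempt %d\n", attempt);
            close(fd);
            return 0;
        }
        if (errno == ECONNREFUSED)      /* errno 111, as in the log */
            fprintf(stderr, "attempt %d: connect() failed, errno = %d (%s)\n",
                    attempt, errno, strerror(errno));
        usleep(100 * 1000);             /* 100 ms backoff between attempts */
    }
    fprintf(stderr, "unable to recover the connection\n");
    return 1;
}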
[log condensed: failure triplets continue from 09:19:20.771 through 09:19:20.777, against tqpair=0x7f7420000b90, 0x7f7428000b90, and 0x12774b0]
[log condensed: failure triplets continue against tqpair=0x12774b0, interleaved with the harness trace below]
00:33:42.924 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:42.924 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:33:42.924 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:42.924 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[log condensed: failure triplets continue from 09:19:20.778 through 09:19:20.782, cycling across tqpair handles 0x12774b0, 0x7f7420000b90, and 0x7f7428000b90]
00:33:42.925 [2024-07-24 09:19:20.783030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.783878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.783902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 
00:33:42.925 [2024-07-24 09:19:20.784469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.784917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.784946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.785827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.785852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 
00:33:42.925 [2024-07-24 09:19:20.785986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.786123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.786256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.925 [2024-07-24 09:19:20.786282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.786388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:42.925 [2024-07-24 09:19:20.786521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.925 [2024-07-24 09:19:20.786654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.786789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.925 [2024-07-24 09:19:20.786814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.786951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.786977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 
00:33:42.925 [2024-07-24 09:19:20.787093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.787257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.787393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.787568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.787699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.787857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.925 [2024-07-24 09:19:20.787882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.925 qpair failed and we were unable to recover it. 00:33:42.925 [2024-07-24 09:19:20.788028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.788200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.788340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.788476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 
00:33:42.926 [2024-07-24 09:19:20.788609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.788747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.788919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.788945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.789850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.789980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 
00:33:42.926 [2024-07-24 09:19:20.790146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.790280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.790422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.790607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.790782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7418000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.790937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.790964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 
00:33:42.926 [2024-07-24 09:19:20.791669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.791939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.791966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.792853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.792879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.793014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.793053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 
00:33:42.926 [2024-07-24 09:19:20.793215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.793244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.793356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.793381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.926 qpair failed and we were unable to recover it. 00:33:42.926 [2024-07-24 09:19:20.793493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.926 [2024-07-24 09:19:20.793518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.793626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.793651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.793791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.793816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.793924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.793949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.794100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.794131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.927 [2024-07-24 09:19:20.794264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.794289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.794429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:42.927 [2024-07-24 09:19:20.794453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 
00:33:42.927 [2024-07-24 09:19:20.794580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.927 [2024-07-24 09:19:20.794607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.794722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.794748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.794879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.794905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it.
00:33:42.927 [2024-07-24 09:19:20.795818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.795843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.795976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.796872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.796897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7428000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 
00:33:42.927 [2024-07-24 09:19:20.797378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.797948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.797973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7420000b90 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.798084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:42.927 [2024-07-24 09:19:20.798117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12774b0 with addr=10.0.0.2, port=4420 00:33:42.927 qpair failed and we were unable to recover it. 00:33:42.927 [2024-07-24 09:19:20.798207] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:42.928 [2024-07-24 09:19:20.800633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.800765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.800798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.800814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.800828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.800860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 
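Note on the failure signature above: every attempt so far dies inside posix_sock_create with errno = 111 because nothing is listening on 10.0.0.2:4420 yet; once the nvmf_tcp_listen NOTICE fires, the TCP handshake succeeds but the NVMe-oF Fabrics CONNECT command itself is rejected (target side: "Unknown controller ID 0x1"; host side: sct 1, sc 130, i.e. 0x82, a Fabrics CONNECT-specific status). A minimal sketch to decode the errno on a Linux box, separate from anything the test runs:

```bash
# Decode errno 111 via Python's errno tables (on Linux this prints ECONNREFUSED).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
```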
00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:42.928 09:19:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3925040 00:33:42.928 [2024-07-24 09:19:20.810536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.810655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.810680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.810695] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.810708] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.810737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.820553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.820667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.820693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.820708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.820721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.820750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 
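Pieced together from the rpc_cmd lines interleaved through the trace, the target bring-up in this test is: create the subsystem, attach the Malloc0 namespace, then add the subsystem and discovery listeners on 10.0.0.2:4420. A standalone sketch of the same sequence, assuming SPDK's stock scripts/rpc.py client and that the TCP transport and the Malloc0 bdev were already created earlier in the run (both happen before this excerpt):

```bash
#!/usr/bin/env bash
# Sketch only: rpc_cmd in the test harness wraps an rpc.py invocation like this.
RPC=./scripts/rpc.py   # path is an assumption; adjust to the SPDK checkout

# Subsystem with any-host access (-a) and the serial number seen in the log.
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
# Expose the Malloc0 bdev as a namespace of that subsystem.
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# Listeners for the subsystem and the discovery service, matching the NOTICE above.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```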
00:33:42.928 [2024-07-24 09:19:20.830503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.830620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.830645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.830660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.830673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.830708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.840569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.840737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.840763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.840777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.840790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.840818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.850596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.850705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.850731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.850746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.850759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.850787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 
00:33:42.928 [2024-07-24 09:19:20.860600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.860711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.860737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.860751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.860764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.860793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.870637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.870756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.870781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.870796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.870809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.870838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.880721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.880872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.880902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.880918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.880931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.880959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 
00:33:42.928 [2024-07-24 09:19:20.890717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.890882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.890907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.890922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.890935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.890963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.900714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.900829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.900854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.900869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.900882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.900910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 00:33:42.928 [2024-07-24 09:19:20.910726] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.910839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.910864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.928 [2024-07-24 09:19:20.910879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.928 [2024-07-24 09:19:20.910892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.928 [2024-07-24 09:19:20.910919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.928 qpair failed and we were unable to recover it. 
00:33:42.928 [2024-07-24 09:19:20.920776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.928 [2024-07-24 09:19:20.920889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.928 [2024-07-24 09:19:20.920915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.929 [2024-07-24 09:19:20.920930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.929 [2024-07-24 09:19:20.920943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.929 [2024-07-24 09:19:20.920977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.929 qpair failed and we were unable to recover it. 00:33:42.929 [2024-07-24 09:19:20.930832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.929 [2024-07-24 09:19:20.930953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.929 [2024-07-24 09:19:20.930978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.929 [2024-07-24 09:19:20.930993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.929 [2024-07-24 09:19:20.931006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.929 [2024-07-24 09:19:20.931033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.929 qpair failed and we were unable to recover it. 00:33:42.929 [2024-07-24 09:19:20.940840] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:42.929 [2024-07-24 09:19:20.940946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:42.929 [2024-07-24 09:19:20.940972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:42.929 [2024-07-24 09:19:20.940986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:42.929 [2024-07-24 09:19:20.941000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:42.929 [2024-07-24 09:19:20.941027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:42.929 qpair failed and we were unable to recover it. 
00:33:42.929 [2024-07-24 09:19:20.950888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:20.951033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:20.951058] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:20.951073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:20.951086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:20.951121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:42.929 [2024-07-24 09:19:20.960931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:20.961049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:20.961074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:20.961088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:20.961108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:20.961139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:42.929 [2024-07-24 09:19:20.970951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:20.971083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:20.971120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:20.971136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:20.971149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:20.971178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:42.929 [2024-07-24 09:19:20.980963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:20.981078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:20.981109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:20.981126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:20.981140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:20.981169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:42.929 [2024-07-24 09:19:20.990986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:20.991118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:20.991144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:20.991161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:20.991175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:20.991204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:42.929 [2024-07-24 09:19:21.001022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:42.929 [2024-07-24 09:19:21.001158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:42.929 [2024-07-24 09:19:21.001184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:42.929 [2024-07-24 09:19:21.001198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:42.929 [2024-07-24 09:19:21.001212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:42.929 [2024-07-24 09:19:21.001240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:42.929 qpair failed and we were unable to recover it.
00:33:43.189 [2024-07-24 09:19:21.011054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.189 [2024-07-24 09:19:21.011179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.189 [2024-07-24 09:19:21.011205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.189 [2024-07-24 09:19:21.011220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.189 [2024-07-24 09:19:21.011239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.189 [2024-07-24 09:19:21.011270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.189 qpair failed and we were unable to recover it.
00:33:43.189 [2024-07-24 09:19:21.021123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.189 [2024-07-24 09:19:21.021252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.189 [2024-07-24 09:19:21.021279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.189 [2024-07-24 09:19:21.021294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.189 [2024-07-24 09:19:21.021311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.189 [2024-07-24 09:19:21.021342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.189 qpair failed and we were unable to recover it.
00:33:43.189 [2024-07-24 09:19:21.031135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.189 [2024-07-24 09:19:21.031293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.189 [2024-07-24 09:19:21.031319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.189 [2024-07-24 09:19:21.031333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.189 [2024-07-24 09:19:21.031346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.189 [2024-07-24 09:19:21.031376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.189 qpair failed and we were unable to recover it.
00:33:43.189 [2024-07-24 09:19:21.041139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.041254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.041280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.041294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.041308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.041336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.051185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.051302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.051327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.051341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.051354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.051383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.061216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.061350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.061376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.061390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.061403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.061431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.071269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.071396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.071421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.071436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.071449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.071477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.081245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.081374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.081399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.081414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.081427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.081456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.091270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.091377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.091403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.091417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.091430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.091459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.101310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.101421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.101446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.101461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.101479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.101508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.111324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.111489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.111513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.111528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.111541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.111570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.121326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.121434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.121460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.121475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.121488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.121516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.131373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.131504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.131529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.131543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.131557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.131585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.141382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.141513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.141547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.141561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.141575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.141605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.151413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.151536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.151562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.151577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.151590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.151617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.161444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.161565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.161591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.190 [2024-07-24 09:19:21.161605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.190 [2024-07-24 09:19:21.161618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.190 [2024-07-24 09:19:21.161647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.190 qpair failed and we were unable to recover it.
00:33:43.190 [2024-07-24 09:19:21.171499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.190 [2024-07-24 09:19:21.171611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.190 [2024-07-24 09:19:21.171636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.171651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.171665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.171693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.181526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.181644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.181669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.181684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.181697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.181726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.191532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.191669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.191694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.191714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.191728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.191757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.201585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.201696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.201721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.201736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.201749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.201778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.211613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.211749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.211775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.211795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.211811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.211840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.221659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.221775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.221800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.221815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.221828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.221857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.231667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.231784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.231809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.231824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.231837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.231866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.241661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.241775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.241801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.241815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.241828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.241857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.251725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.251859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.251885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.251900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.251913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.251943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.261749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.261880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.261905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.261920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.261933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.261962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.271751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.271879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.271904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.271919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.271932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.271962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.281781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.281890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.281915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.281935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.281950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.281979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.291819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.291934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.291959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.291973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.291986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.191 [2024-07-24 09:19:21.292015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.191 qpair failed and we were unable to recover it.
00:33:43.191 [2024-07-24 09:19:21.301873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.191 [2024-07-24 09:19:21.302042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.191 [2024-07-24 09:19:21.302067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.191 [2024-07-24 09:19:21.302082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.191 [2024-07-24 09:19:21.302095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.192 [2024-07-24 09:19:21.302133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.192 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.311875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.311989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.312014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.312028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.312042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.312072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.321915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.322024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.322049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.322064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.322077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.322113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.331919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.332034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.332059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.332074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.332087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.332128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.341976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.342096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.342129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.342144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.342157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.342186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.351985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.352106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.352140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.352157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.352169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.352200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.362052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.362185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.362211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.362225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.362239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.362267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.372046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.372161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.372186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.372205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.372219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.372248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.382073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.382195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.382221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.382235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.382249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.382277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.392125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.392244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.392269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.451 [2024-07-24 09:19:21.392283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.451 [2024-07-24 09:19:21.392296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.451 [2024-07-24 09:19:21.392326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.451 qpair failed and we were unable to recover it.
00:33:43.451 [2024-07-24 09:19:21.402158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.451 [2024-07-24 09:19:21.402279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.451 [2024-07-24 09:19:21.402304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.402319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.402331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.402359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.412215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.412338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.412363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.412378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.412391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.412421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.422205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.422319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.422345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.422359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.422372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.422401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.432257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.432372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.432396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.432410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.432424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.432452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.442317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.442437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.442461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.442475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.442487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.442515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.452282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.452389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.452415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.452431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.452443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.452471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.462309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.462420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.462449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.462465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.462479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.462507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.472380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.472497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.472522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.472536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.472549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.472578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.482376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.482493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.482518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.482533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.482546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.482576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.492392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.492502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.492528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.492543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.492558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.492587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.502419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.502530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.502555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.502569] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.502583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.502613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.512471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:43.452 [2024-07-24 09:19:21.512587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:43.452 [2024-07-24 09:19:21.512612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:43.452 [2024-07-24 09:19:21.512626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:43.452 [2024-07-24 09:19:21.512639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0
00:33:43.452 [2024-07-24 09:19:21.512667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:33:43.452 qpair failed and we were unable to recover it.
00:33:43.452 [2024-07-24 09:19:21.522486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.452 [2024-07-24 09:19:21.522625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.452 [2024-07-24 09:19:21.522650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.452 [2024-07-24 09:19:21.522665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.452 [2024-07-24 09:19:21.522678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.452 [2024-07-24 09:19:21.522706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.452 qpair failed and we were unable to recover it. 00:33:43.452 [2024-07-24 09:19:21.532543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.452 [2024-07-24 09:19:21.532706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.452 [2024-07-24 09:19:21.532731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.452 [2024-07-24 09:19:21.532745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.452 [2024-07-24 09:19:21.532759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.452 [2024-07-24 09:19:21.532787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-24 09:19:21.542653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.453 [2024-07-24 09:19:21.542762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.453 [2024-07-24 09:19:21.542786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.453 [2024-07-24 09:19:21.542800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.453 [2024-07-24 09:19:21.542813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.453 [2024-07-24 09:19:21.542842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.453 qpair failed and we were unable to recover it. 
00:33:43.453 [2024-07-24 09:19:21.552575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.453 [2024-07-24 09:19:21.552687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.453 [2024-07-24 09:19:21.552718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.453 [2024-07-24 09:19:21.552733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.453 [2024-07-24 09:19:21.552746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.453 [2024-07-24 09:19:21.552774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.453 [2024-07-24 09:19:21.562618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.453 [2024-07-24 09:19:21.562733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.453 [2024-07-24 09:19:21.562758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.453 [2024-07-24 09:19:21.562772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.453 [2024-07-24 09:19:21.562785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.453 [2024-07-24 09:19:21.562813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.453 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.572669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.572787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.572813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.572827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.572840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.572869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 
00:33:43.712 [2024-07-24 09:19:21.582683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.582809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.582834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.582848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.582862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.582889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.592779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.592890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.592915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.592929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.592942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.592976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.602737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.602857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.602883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.602897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.602910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.602938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 
00:33:43.712 [2024-07-24 09:19:21.612911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.613041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.613067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.613082] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.613095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.613132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.622834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.622943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.622969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.622983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.622996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.623024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.632884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.633003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.633028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.633043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.633056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.633084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 
00:33:43.712 [2024-07-24 09:19:21.642878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.642988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.643019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.643034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.643048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.643076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.652873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.652986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.653012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.653029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.653043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.653072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 00:33:43.712 [2024-07-24 09:19:21.662884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.662993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.663018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.712 [2024-07-24 09:19:21.663033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.712 [2024-07-24 09:19:21.663046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.712 [2024-07-24 09:19:21.663074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.712 qpair failed and we were unable to recover it. 
00:33:43.712 [2024-07-24 09:19:21.672953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.712 [2024-07-24 09:19:21.673115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.712 [2024-07-24 09:19:21.673150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.673164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.673180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.673208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.683006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.683125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.683151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.683165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.683178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.683213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.692974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.693089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.693125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.693140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.693153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.693182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 
00:33:43.713 [2024-07-24 09:19:21.703003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.703118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.703144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.703159] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.703172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.703200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.713037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.713153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.713178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.713193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.713206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.713236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.723069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.723191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.723216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.723231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.723244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.723272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 
00:33:43.713 [2024-07-24 09:19:21.733093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.733216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.733246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.733261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.733274] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.733302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.743146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.743255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.743280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.743294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.743308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.743336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.753152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.753271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.753296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.753310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.753324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.753352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 
00:33:43.713 [2024-07-24 09:19:21.763189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.763309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.763334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.763348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.763362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.763390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.773262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.773390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.773415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.773429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.773448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.773477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.783321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.783436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.783461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.783476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.783490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.783519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 
00:33:43.713 [2024-07-24 09:19:21.793282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.793394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.793419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.793433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.713 [2024-07-24 09:19:21.793446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.713 [2024-07-24 09:19:21.793475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.713 qpair failed and we were unable to recover it. 00:33:43.713 [2024-07-24 09:19:21.803316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.713 [2024-07-24 09:19:21.803434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.713 [2024-07-24 09:19:21.803459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.713 [2024-07-24 09:19:21.803473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.714 [2024-07-24 09:19:21.803486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.714 [2024-07-24 09:19:21.803515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.714 qpair failed and we were unable to recover it. 00:33:43.714 [2024-07-24 09:19:21.813328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.714 [2024-07-24 09:19:21.813433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.714 [2024-07-24 09:19:21.813458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.714 [2024-07-24 09:19:21.813473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.714 [2024-07-24 09:19:21.813486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.714 [2024-07-24 09:19:21.813514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.714 qpair failed and we were unable to recover it. 
00:33:43.714 [2024-07-24 09:19:21.823341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.714 [2024-07-24 09:19:21.823516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.714 [2024-07-24 09:19:21.823541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.714 [2024-07-24 09:19:21.823556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.714 [2024-07-24 09:19:21.823569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.714 [2024-07-24 09:19:21.823597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.714 qpair failed and we were unable to recover it. 00:33:43.973 [2024-07-24 09:19:21.833394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.833525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.833550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.833565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.833578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.833606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 00:33:43.973 [2024-07-24 09:19:21.843413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.843533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.843558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.843572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.843586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.843614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 
00:33:43.973 [2024-07-24 09:19:21.853445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.853556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.853582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.853596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.853610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.853638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 00:33:43.973 [2024-07-24 09:19:21.863506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.863615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.863640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.863654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.863673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.863702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 00:33:43.973 [2024-07-24 09:19:21.873511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.873627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.873652] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.873666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.873679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.873709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 
00:33:43.973 [2024-07-24 09:19:21.883554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.973 [2024-07-24 09:19:21.883675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.973 [2024-07-24 09:19:21.883699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.973 [2024-07-24 09:19:21.883714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.973 [2024-07-24 09:19:21.883727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.973 [2024-07-24 09:19:21.883756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.973 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.893580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.893700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.893725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.893739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.893752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.893780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.903632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.903787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.903812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.903827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.903840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.903868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 
00:33:43.974 [2024-07-24 09:19:21.913634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.913758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.913783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.913798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.913811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.913840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.923624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.923789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.923815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.923832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.923847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.923876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.933653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.933765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.933791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.933805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.933819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.933847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 
00:33:43.974 [2024-07-24 09:19:21.943676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.943785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.943810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.943824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.943838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.943866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.953803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.953924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.953948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.953962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.953981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.954010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.963739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.963852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.963877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.963891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.963904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.963933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 
00:33:43.974 [2024-07-24 09:19:21.973770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.973879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.973903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.973918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.973931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.973959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.983807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.983938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.983963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.983978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.983991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.984019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:21.993831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:21.993965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:21.993990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:21.994004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:21.994017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:21.994046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 
00:33:43.974 [2024-07-24 09:19:22.003882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:22.003995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:22.004020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:22.004034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:22.004047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:22.004075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:22.013889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.974 [2024-07-24 09:19:22.014011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.974 [2024-07-24 09:19:22.014036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.974 [2024-07-24 09:19:22.014051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.974 [2024-07-24 09:19:22.014064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.974 [2024-07-24 09:19:22.014092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.974 qpair failed and we were unable to recover it. 00:33:43.974 [2024-07-24 09:19:22.023950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.024079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.024112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.024129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.024142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.024171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 
00:33:43.975 [2024-07-24 09:19:22.033942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.034058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.034083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.034099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.034124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.034154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 00:33:43.975 [2024-07-24 09:19:22.043989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.044110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.044135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.044156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.044170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.044199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 00:33:43.975 [2024-07-24 09:19:22.054023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.054163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.054188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.054203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.054215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.054245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 
00:33:43.975 [2024-07-24 09:19:22.064033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.064159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.064184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.064198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.064211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.064240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 00:33:43.975 [2024-07-24 09:19:22.074083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.074217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.074241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.074256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.074268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.074297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 00:33:43.975 [2024-07-24 09:19:22.084126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:43.975 [2024-07-24 09:19:22.084276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:43.975 [2024-07-24 09:19:22.084301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:43.975 [2024-07-24 09:19:22.084316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:43.975 [2024-07-24 09:19:22.084329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:43.975 [2024-07-24 09:19:22.084358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:43.975 qpair failed and we were unable to recover it. 
00:33:44.234 [2024-07-24 09:19:22.094127] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.234 [2024-07-24 09:19:22.094238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.234 [2024-07-24 09:19:22.094263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.234 [2024-07-24 09:19:22.094278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.234 [2024-07-24 09:19:22.094291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.234 [2024-07-24 09:19:22.094319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.234 qpair failed and we were unable to recover it. 00:33:44.234 [2024-07-24 09:19:22.104208] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.234 [2024-07-24 09:19:22.104336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.234 [2024-07-24 09:19:22.104361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.234 [2024-07-24 09:19:22.104377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.234 [2024-07-24 09:19:22.104393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.234 [2024-07-24 09:19:22.104422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.234 qpair failed and we were unable to recover it. 00:33:44.234 [2024-07-24 09:19:22.114196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.234 [2024-07-24 09:19:22.114318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.234 [2024-07-24 09:19:22.114343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.234 [2024-07-24 09:19:22.114357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.234 [2024-07-24 09:19:22.114371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.234 [2024-07-24 09:19:22.114399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.234 qpair failed and we were unable to recover it. 
00:33:44.234 [2024-07-24 09:19:22.124225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.234 [2024-07-24 09:19:22.124342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.234 [2024-07-24 09:19:22.124367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.234 [2024-07-24 09:19:22.124381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.234 [2024-07-24 09:19:22.124395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.234 [2024-07-24 09:19:22.124424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.234 qpair failed and we were unable to recover it. 00:33:44.234 [2024-07-24 09:19:22.134228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.134335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.134360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.134381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.134395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.134425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.144334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.144450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.144475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.144490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.144503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.144531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 
00:33:44.235 [2024-07-24 09:19:22.154289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.154408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.154433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.154448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.154461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.154489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.164333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.164465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.164491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.164505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.164518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.164546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.174374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.174486] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.174511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.174525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.174538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.174567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 
00:33:44.235 [2024-07-24 09:19:22.184397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.184510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.184536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.184550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.184563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.184592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.194405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.194521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.194545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.194560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.194573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.194602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.204443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.204579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.204605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.204619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.204632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.204659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 
00:33:44.235 [2024-07-24 09:19:22.214477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.214597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.214622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.214637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.214650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.214678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.224475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.224602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.224633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.224648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.224661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.224689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.234521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.234631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.234656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.234670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.234683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.234711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 
00:33:44.235 [2024-07-24 09:19:22.244556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.244678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.244704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.244719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.244732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.244760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.254616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.254727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.254752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.235 [2024-07-24 09:19:22.254767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.235 [2024-07-24 09:19:22.254780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.235 [2024-07-24 09:19:22.254808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.235 qpair failed and we were unable to recover it. 00:33:44.235 [2024-07-24 09:19:22.264582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.235 [2024-07-24 09:19:22.264696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.235 [2024-07-24 09:19:22.264721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.264736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.264749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.264777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 
00:33:44.236 [2024-07-24 09:19:22.274672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.274832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.274857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.274871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.274884] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.274912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.236 [2024-07-24 09:19:22.284673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.284792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.284818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.284835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.284848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.284877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.236 [2024-07-24 09:19:22.294658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.294782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.294808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.294822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.294835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.294864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 
00:33:44.236 [2024-07-24 09:19:22.304695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.304806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.304831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.304845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.304858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.304886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.236 [2024-07-24 09:19:22.314725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.314838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.314868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.314883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.314896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.314924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.236 [2024-07-24 09:19:22.324763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.324889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.324914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.324929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.324942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.324970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 
00:33:44.236 [2024-07-24 09:19:22.334789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.334898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.334924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.334939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.334953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.334982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.236 [2024-07-24 09:19:22.344825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.236 [2024-07-24 09:19:22.344956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.236 [2024-07-24 09:19:22.344982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.236 [2024-07-24 09:19:22.344996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.236 [2024-07-24 09:19:22.345010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.236 [2024-07-24 09:19:22.345038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.236 qpair failed and we were unable to recover it. 00:33:44.495 [2024-07-24 09:19:22.354858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.354975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.355000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.355015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.355029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.355064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 
00:33:44.495 [2024-07-24 09:19:22.364887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.365003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.365028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.365042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.365055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.365083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 00:33:44.495 [2024-07-24 09:19:22.374897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.375024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.375049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.375063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.375077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.375113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 00:33:44.495 [2024-07-24 09:19:22.384994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.385121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.385147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.385162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.385175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.385205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 
00:33:44.495 [2024-07-24 09:19:22.394990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.395123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.395148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.395163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.395176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.395206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 00:33:44.495 [2024-07-24 09:19:22.405013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.405132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.405167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.405182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.405196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.405224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 00:33:44.495 [2024-07-24 09:19:22.415019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.415138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.415172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.415186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.415199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.495 [2024-07-24 09:19:22.415228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.495 qpair failed and we were unable to recover it. 
00:33:44.495 [2024-07-24 09:19:22.425083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.495 [2024-07-24 09:19:22.425210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.495 [2024-07-24 09:19:22.425235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.495 [2024-07-24 09:19:22.425249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.495 [2024-07-24 09:19:22.425262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.425291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.435096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.435235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.435260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.435275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.435288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.435318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.445115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.445243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.445267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.445280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.445292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.445325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 
00:33:44.496 [2024-07-24 09:19:22.455152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.455263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.455288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.455302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.455315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.455343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.465178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.465301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.465326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.465340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.465354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.465382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.475216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.475341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.475366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.475380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.475394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.475422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 
00:33:44.496 [2024-07-24 09:19:22.485253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.485401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.485426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.485440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.485453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.485481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.495256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.495383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.495413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.495428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.495441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.495470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.505285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.505396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.505421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.505436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.505449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.505477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 
00:33:44.496 [2024-07-24 09:19:22.515357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.515480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.515505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.515520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.515533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.515563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.525373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.525524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.525549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.525564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.525577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.525605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.535369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.535491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.535516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.535531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.535549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.535578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 
00:33:44.496 [2024-07-24 09:19:22.545394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.545521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.545546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.545560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.545573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.545601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.555464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.496 [2024-07-24 09:19:22.555607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.496 [2024-07-24 09:19:22.555632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.496 [2024-07-24 09:19:22.555647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.496 [2024-07-24 09:19:22.555660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.496 [2024-07-24 09:19:22.555688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.496 qpair failed and we were unable to recover it. 00:33:44.496 [2024-07-24 09:19:22.565521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.497 [2024-07-24 09:19:22.565634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.497 [2024-07-24 09:19:22.565659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.497 [2024-07-24 09:19:22.565674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.497 [2024-07-24 09:19:22.565687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.497 [2024-07-24 09:19:22.565715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.497 qpair failed and we were unable to recover it. 
00:33:44.497 [2024-07-24 09:19:22.575519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.497 [2024-07-24 09:19:22.575635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.497 [2024-07-24 09:19:22.575660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.497 [2024-07-24 09:19:22.575674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.497 [2024-07-24 09:19:22.575688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.497 [2024-07-24 09:19:22.575716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.497 qpair failed and we were unable to recover it. 00:33:44.497 [2024-07-24 09:19:22.585528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.497 [2024-07-24 09:19:22.585646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.497 [2024-07-24 09:19:22.585671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.497 [2024-07-24 09:19:22.585686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.497 [2024-07-24 09:19:22.585699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.497 [2024-07-24 09:19:22.585727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.497 qpair failed and we were unable to recover it. 00:33:44.497 [2024-07-24 09:19:22.595574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.497 [2024-07-24 09:19:22.595691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.497 [2024-07-24 09:19:22.595717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.497 [2024-07-24 09:19:22.595731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.497 [2024-07-24 09:19:22.595744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.497 [2024-07-24 09:19:22.595773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.497 qpair failed and we were unable to recover it. 
00:33:44.497 [2024-07-24 09:19:22.605597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.497 [2024-07-24 09:19:22.605713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.497 [2024-07-24 09:19:22.605737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.497 [2024-07-24 09:19:22.605752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.497 [2024-07-24 09:19:22.605765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.497 [2024-07-24 09:19:22.605793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.497 qpair failed and we were unable to recover it. 00:33:44.756 [2024-07-24 09:19:22.615609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.615719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.615744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.615759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.615772] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.615800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 00:33:44.756 [2024-07-24 09:19:22.625619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.625730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.625756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.625770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.625788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.625817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 
00:33:44.756 [2024-07-24 09:19:22.635680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.635843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.635868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.635883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.635896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.635925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 00:33:44.756 [2024-07-24 09:19:22.645681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.645840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.645864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.645879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.645892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.645922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 00:33:44.756 [2024-07-24 09:19:22.655745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.655876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.655902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.655916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.655929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.655957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 
00:33:44.756 [2024-07-24 09:19:22.665745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.665860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.665885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.665900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.665913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.665941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.756 qpair failed and we were unable to recover it. 00:33:44.756 [2024-07-24 09:19:22.675798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.756 [2024-07-24 09:19:22.675920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.756 [2024-07-24 09:19:22.675945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.756 [2024-07-24 09:19:22.675959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.756 [2024-07-24 09:19:22.675971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.756 [2024-07-24 09:19:22.675999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.685802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.685924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.685950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.685964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.685977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.686006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 
00:33:44.757 [2024-07-24 09:19:22.695873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.695988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.696013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.696028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.696041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.696069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.705868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.705982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.706008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.706023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.706036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.706065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.715891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.716012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.716037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.716052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.716071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.716100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 
00:33:44.757 [2024-07-24 09:19:22.725961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.726088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.726123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.726138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.726151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.726180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.735971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.736084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.736116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.736131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.736145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.736175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.745962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.746109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.746135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.746149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.746162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.746190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 
00:33:44.757 [2024-07-24 09:19:22.756002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.756158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.756184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.756198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.756211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12774b0 00:33:44.757 [2024-07-24 09:19:22.756240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.766047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.766162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.766194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.766210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.766223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.757 [2024-07-24 09:19:22.766255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.776079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.776207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.776235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.776250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.776263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.757 [2024-07-24 09:19:22.776294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.757 qpair failed and we were unable to recover it. 
00:33:44.757 [2024-07-24 09:19:22.786115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.786237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.786264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.786279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.786293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.757 [2024-07-24 09:19:22.786326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.796147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.796262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.796289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.796304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.796317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.757 [2024-07-24 09:19:22.796347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.757 qpair failed and we were unable to recover it. 00:33:44.757 [2024-07-24 09:19:22.806172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.757 [2024-07-24 09:19:22.806295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.757 [2024-07-24 09:19:22.806322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.757 [2024-07-24 09:19:22.806342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.757 [2024-07-24 09:19:22.806356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.757 [2024-07-24 09:19:22.806387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.757 qpair failed and we were unable to recover it. 
00:33:44.757 [2024-07-24 09:19:22.816228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.816339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.816365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.816380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.816394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.816423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 00:33:44.758 [2024-07-24 09:19:22.826232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.826385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.826412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.826426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.826440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.826472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 00:33:44.758 [2024-07-24 09:19:22.836269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.836389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.836416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.836432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.836445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.836486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 
00:33:44.758 [2024-07-24 09:19:22.846303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.846433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.846459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.846474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.846487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.846518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 00:33:44.758 [2024-07-24 09:19:22.856308] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.856417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.856444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.856461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.856474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.856505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 00:33:44.758 [2024-07-24 09:19:22.866336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:44.758 [2024-07-24 09:19:22.866444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:44.758 [2024-07-24 09:19:22.866471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:44.758 [2024-07-24 09:19:22.866486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:44.758 [2024-07-24 09:19:22.866499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:44.758 [2024-07-24 09:19:22.866530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:44.758 qpair failed and we were unable to recover it. 
00:33:45.017 [2024-07-24 09:19:22.876364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.876479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.876506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.876521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.876535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.876566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 00:33:45.017 [2024-07-24 09:19:22.886424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.886534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.886561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.886575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.886589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.886620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 00:33:45.017 [2024-07-24 09:19:22.896405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.896525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.896558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.896574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.896588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.896618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 
00:33:45.017 [2024-07-24 09:19:22.906446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.906600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.906627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.906642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.906655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.906685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 00:33:45.017 [2024-07-24 09:19:22.916467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.916585] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.916611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.916626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.916639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.916670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 00:33:45.017 [2024-07-24 09:19:22.926551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.926665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.926691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.926707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.926720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.926750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 
00:33:45.017 [2024-07-24 09:19:22.936516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.017 [2024-07-24 09:19:22.936626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.017 [2024-07-24 09:19:22.936653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.017 [2024-07-24 09:19:22.936668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.017 [2024-07-24 09:19:22.936680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.017 [2024-07-24 09:19:22.936716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.017 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:22.946540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.946649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.946675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.946690] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.946703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.946733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:22.956584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.956711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.956737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.956751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.956765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.956795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 
00:33:45.018 [2024-07-24 09:19:22.966638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.966806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.966833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.966847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.966861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.966892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:22.976664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.976814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.976840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.976855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.976869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.976899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:22.986693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.986805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.986837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.986852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.986866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.986896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 
00:33:45.018 [2024-07-24 09:19:22.996713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:22.996833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:22.996860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:22.996874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:22.996891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:22.996920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.006732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.006842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.006867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.006882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.006895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.006928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.016793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.016906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.016932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.016947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.016960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.016989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 
00:33:45.018 [2024-07-24 09:19:23.026812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.026925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.026951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.026966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.026980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.027016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.036853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.036974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.037000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.037015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.037028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.037058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.046883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.046999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.047025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.047040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.047053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.047084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 
00:33:45.018 [2024-07-24 09:19:23.056909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.057018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.057045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.057059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.057072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.057109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.066887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.067021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.018 [2024-07-24 09:19:23.067048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.018 [2024-07-24 09:19:23.067062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.018 [2024-07-24 09:19:23.067076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.018 [2024-07-24 09:19:23.067112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.018 qpair failed and we were unable to recover it. 00:33:45.018 [2024-07-24 09:19:23.076942] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.018 [2024-07-24 09:19:23.077056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.077087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.077108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.077123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.077154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 
00:33:45.019 [2024-07-24 09:19:23.086967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.019 [2024-07-24 09:19:23.087086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.087119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.087135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.087148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.087179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 00:33:45.019 [2024-07-24 09:19:23.096990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.019 [2024-07-24 09:19:23.097097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.097130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.097145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.097158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.097187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 00:33:45.019 [2024-07-24 09:19:23.107042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.019 [2024-07-24 09:19:23.107183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.107209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.107224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.107238] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.107268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 
00:33:45.019 [2024-07-24 09:19:23.117080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.019 [2024-07-24 09:19:23.117254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.117281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.117295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.117314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.117346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 00:33:45.019 [2024-07-24 09:19:23.127083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.019 [2024-07-24 09:19:23.127205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.019 [2024-07-24 09:19:23.127232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.019 [2024-07-24 09:19:23.127247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.019 [2024-07-24 09:19:23.127261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.019 [2024-07-24 09:19:23.127292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.019 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.137141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.137276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.137307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.137332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.137349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.137393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 
00:33:45.279 [2024-07-24 09:19:23.147142] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.147273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.147300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.147318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.147333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.147365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.157174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.157299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.157326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.157341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.157354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.157384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.167195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.167321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.167347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.167362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.167375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.167405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 
00:33:45.279 [2024-07-24 09:19:23.177248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.177413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.177439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.177454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.177467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.177498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.187239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.187349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.187374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.187389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.187403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.187433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.197287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.197405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.197430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.197445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.197458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.197488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 
00:33:45.279 [2024-07-24 09:19:23.207300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.207413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.207439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.207459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.207474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.207504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.217319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.217479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.217504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.217520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.217533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.217562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 00:33:45.279 [2024-07-24 09:19:23.227379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.279 [2024-07-24 09:19:23.227515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.279 [2024-07-24 09:19:23.227540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.279 [2024-07-24 09:19:23.227555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.279 [2024-07-24 09:19:23.227568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.279 [2024-07-24 09:19:23.227598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.279 qpair failed and we were unable to recover it. 
00:33:45.279 [2024-07-24 09:19:23.237391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.279 [2024-07-24 09:19:23.237506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.279 [2024-07-24 09:19:23.237531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.279 [2024-07-24 09:19:23.237547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.279 [2024-07-24 09:19:23.237560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.279 [2024-07-24 09:19:23.237592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.279 qpair failed and we were unable to recover it.
00:33:45.279 [2024-07-24 09:19:23.247403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.279 [2024-07-24 09:19:23.247509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.279 [2024-07-24 09:19:23.247535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.279 [2024-07-24 09:19:23.247550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.279 [2024-07-24 09:19:23.247564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.279 [2024-07-24 09:19:23.247593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.279 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.257438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.257568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.257594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.257609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.257622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.257651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.267459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.267572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.267598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.267612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.267626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.267656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.277487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.277607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.277633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.277647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.277661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.277690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.287509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.287624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.287650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.287665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.287678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.287708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.297571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.297683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.297709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.297730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.297744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.297774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.307624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.307734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.307760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.307775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.307788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.307818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.317635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.317745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.317771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.317785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.317798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.317828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.327655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.327778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.327805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.327819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.327833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.327862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.337710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.337824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.337850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.337865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.337878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.337911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.347721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.347888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.347914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.347932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.347947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.347978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.357736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.357860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.357886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.357901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.357915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.357944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.367739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.367849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.367874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.367889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.367902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.367933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.377791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.377929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.280 [2024-07-24 09:19:23.377957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.280 [2024-07-24 09:19:23.377972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.280 [2024-07-24 09:19:23.377985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.280 [2024-07-24 09:19:23.378015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.280 qpair failed and we were unable to recover it.
00:33:45.280 [2024-07-24 09:19:23.387880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.280 [2024-07-24 09:19:23.388011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.281 [2024-07-24 09:19:23.388042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.281 [2024-07-24 09:19:23.388058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.281 [2024-07-24 09:19:23.388071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.281 [2024-07-24 09:19:23.388109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.281 qpair failed and we were unable to recover it.
00:33:45.538 [2024-07-24 09:19:23.397868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.538 [2024-07-24 09:19:23.397986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.538 [2024-07-24 09:19:23.398012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.538 [2024-07-24 09:19:23.398028] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.538 [2024-07-24 09:19:23.398054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.538 [2024-07-24 09:19:23.398092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.538 qpair failed and we were unable to recover it.
00:33:45.538 [2024-07-24 09:19:23.407879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.538 [2024-07-24 09:19:23.408001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.538 [2024-07-24 09:19:23.408028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.538 [2024-07-24 09:19:23.408043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.538 [2024-07-24 09:19:23.408056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.538 [2024-07-24 09:19:23.408086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.538 qpair failed and we were unable to recover it.
00:33:45.538 [2024-07-24 09:19:23.417895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.538 [2024-07-24 09:19:23.418008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.538 [2024-07-24 09:19:23.418035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.538 [2024-07-24 09:19:23.418050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.538 [2024-07-24 09:19:23.418063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.538 [2024-07-24 09:19:23.418094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.538 qpair failed and we were unable to recover it.
00:33:45.538 [2024-07-24 09:19:23.427961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.538 [2024-07-24 09:19:23.428090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.538 [2024-07-24 09:19:23.428124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.538 [2024-07-24 09:19:23.428140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.538 [2024-07-24 09:19:23.428153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.538 [2024-07-24 09:19:23.428189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.538 qpair failed and we were unable to recover it.
00:33:45.538 [2024-07-24 09:19:23.437964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.538 [2024-07-24 09:19:23.438086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.538 [2024-07-24 09:19:23.438121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.538 [2024-07-24 09:19:23.438137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.538 [2024-07-24 09:19:23.438150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.438181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.447968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.448078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.448108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.448125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.448138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.448167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.457998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.458119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.458146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.458160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.458174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.458204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.468030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.468153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.468180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.468195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.468208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.468238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.478061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.478197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.478228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.478244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.478257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.478287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.488094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.488226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.488253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.488268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.488281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.488311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.498154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.498276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.498302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.498319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.498332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.498364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.508169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.508286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.508312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.508327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.508341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.508373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.518220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.518335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.518361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.518376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.518395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.518440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.528241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.528368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.528394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.528408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.528422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.528452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.538254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.538361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.538387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.538402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.538415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.538444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.548307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.548458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.548484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.548499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.548512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.548543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.558302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.558419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.539 [2024-07-24 09:19:23.558445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.539 [2024-07-24 09:19:23.558460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.539 [2024-07-24 09:19:23.558473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.539 [2024-07-24 09:19:23.558506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.539 qpair failed and we were unable to recover it.
00:33:45.539 [2024-07-24 09:19:23.568355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.539 [2024-07-24 09:19:23.568492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.568518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.568533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.568547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.568576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.578394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.578509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.578538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.578553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.578570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.578602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.588414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.588530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.588555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.588571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.588584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.588613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.598429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.598549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.598575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.598590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.598602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.598633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.608436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.608574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.608600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.608621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.608635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.608665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.618467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.618578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.618605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.618619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.618633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.618663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.628523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.628644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.628671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.628686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.628699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.628730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.638585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.638710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.638737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.638751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.638765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.638795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.540 [2024-07-24 09:19:23.648568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.540 [2024-07-24 09:19:23.648688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.540 [2024-07-24 09:19:23.648714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.540 [2024-07-24 09:19:23.648729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.540 [2024-07-24 09:19:23.648743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.540 [2024-07-24 09:19:23.648772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.540 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.658597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.658725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.658752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.658767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.658780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.658811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.668608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.668721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.668748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.668762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.668776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.668806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.678698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.678819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.678846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.678861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.678874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.678904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.688697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.688817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.688843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.688858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.688872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.688913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.698716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.698825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.698852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.698872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.698886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.698917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.708719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.708834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.708861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.708876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.708889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.708919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.718796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.718916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.718941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.718956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.718969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.718999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.728779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.799 [2024-07-24 09:19:23.728889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.799 [2024-07-24 09:19:23.728915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.799 [2024-07-24 09:19:23.728930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.799 [2024-07-24 09:19:23.728943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.799 [2024-07-24 09:19:23.728975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.799 qpair failed and we were unable to recover it.
00:33:45.799 [2024-07-24 09:19:23.738801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.738909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.738935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.738950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.738963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.738994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.748862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.748982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.749009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.749024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.749038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.749080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.758908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.759028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.759054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.759069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.759082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.759122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.768879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.768991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.769017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.769032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.769045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.769074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.778914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.779022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.779048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.779063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.779076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.779115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.788939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.789051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.789082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.789097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.789119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.789149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.799021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:45.800 [2024-07-24 09:19:23.799139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:45.800 [2024-07-24 09:19:23.799165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:45.800 [2024-07-24 09:19:23.799180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:45.800 [2024-07-24 09:19:23.799194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:45.800 [2024-07-24 09:19:23.799224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:45.800 qpair failed and we were unable to recover it.
00:33:45.800 [2024-07-24 09:19:23.809066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.809234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.809261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.809276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.800 [2024-07-24 09:19:23.809289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.800 [2024-07-24 09:19:23.809321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.800 qpair failed and we were unable to recover it. 00:33:45.800 [2024-07-24 09:19:23.819037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.819156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.819182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.819197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.800 [2024-07-24 09:19:23.819210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.800 [2024-07-24 09:19:23.819240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.800 qpair failed and we were unable to recover it. 00:33:45.800 [2024-07-24 09:19:23.829093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.829257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.829283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.829298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.800 [2024-07-24 09:19:23.829312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.800 [2024-07-24 09:19:23.829348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.800 qpair failed and we were unable to recover it. 
00:33:45.800 [2024-07-24 09:19:23.839123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.839283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.839309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.839324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.800 [2024-07-24 09:19:23.839338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.800 [2024-07-24 09:19:23.839368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.800 qpair failed and we were unable to recover it. 00:33:45.800 [2024-07-24 09:19:23.849153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.849272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.849297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.849313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.800 [2024-07-24 09:19:23.849326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.800 [2024-07-24 09:19:23.849356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.800 qpair failed and we were unable to recover it. 00:33:45.800 [2024-07-24 09:19:23.859295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.800 [2024-07-24 09:19:23.859427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.800 [2024-07-24 09:19:23.859453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.800 [2024-07-24 09:19:23.859468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.859481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.859512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 
00:33:45.801 [2024-07-24 09:19:23.869266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.801 [2024-07-24 09:19:23.869426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.801 [2024-07-24 09:19:23.869452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.801 [2024-07-24 09:19:23.869468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.869481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.869511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 00:33:45.801 [2024-07-24 09:19:23.879250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.801 [2024-07-24 09:19:23.879368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.801 [2024-07-24 09:19:23.879408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.801 [2024-07-24 09:19:23.879423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.879437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.879467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 00:33:45.801 [2024-07-24 09:19:23.889271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.801 [2024-07-24 09:19:23.889382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.801 [2024-07-24 09:19:23.889408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.801 [2024-07-24 09:19:23.889423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.889437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.889468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 
00:33:45.801 [2024-07-24 09:19:23.899311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.801 [2024-07-24 09:19:23.899422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.801 [2024-07-24 09:19:23.899448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.801 [2024-07-24 09:19:23.899463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.899476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.899519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 00:33:45.801 [2024-07-24 09:19:23.909321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:45.801 [2024-07-24 09:19:23.909429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:45.801 [2024-07-24 09:19:23.909455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:45.801 [2024-07-24 09:19:23.909470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:45.801 [2024-07-24 09:19:23.909483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:45.801 [2024-07-24 09:19:23.909524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:45.801 qpair failed and we were unable to recover it. 00:33:46.060 [2024-07-24 09:19:23.919335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.919450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.919476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.919491] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.919510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.919542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 
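On the target side, "_nvmf_ctrlr_add_io_qpair: Unknown controller ID 0x1" means the cntlid carried in the I/O-queue CONNECT data did not match any live controller on the subsystem, which is what happens when the admin controller has already been torn down (for example by a target restart mid-test) while the host keeps retrying with the stale cntlid. A schematic of that lookup as a self-contained toy program; every name here is a hypothetical stand-in for SPDK internals, and only the control flow mirrors the log:

    /* cntlid_lookup.c - schematic of the check behind "Unknown controller ID 0x1".
     * find_ctrlr() and the table are hypothetical stand-ins for SPDK internals. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct ctrlr { uint16_t cntlid; };

    static struct ctrlr *find_ctrlr(struct ctrlr *tbl, size_t n, uint16_t cntlid)
    {
        for (size_t i = 0; i < n; i++)
            if (tbl[i].cntlid == cntlid)
                return &tbl[i];
        return NULL;
    }

    int main(void)
    {
        /* Pretend the admin controller was destroyed: no entry for 0x1 remains. */
        struct ctrlr live[] = { { 0x2 }, { 0x3 } };
        uint16_t cntlid = 0x1;   /* cntlid from the I/O-queue CONNECT data */

        if (find_ctrlr(live, 2, cntlid) == NULL) {
            fprintf(stderr, "Unknown controller ID 0x%x\n", cntlid);
            /* The target then completes CONNECT with sct 1, sc 0x82, which the
             * host-side log renders as "sct 1, sc 130". */
            return 1;
        }
        return 0;
    }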
00:33:46.060 [2024-07-24 09:19:23.929357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.929468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.929494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.929509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.929522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.929556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 00:33:46.060 [2024-07-24 09:19:23.939402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.939520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.939546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.939560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.939574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.939603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 00:33:46.060 [2024-07-24 09:19:23.949438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.949552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.949578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.949593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.949606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.949638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 
00:33:46.060 [2024-07-24 09:19:23.959479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.959598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.959623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.959638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.959652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.959682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 00:33:46.060 [2024-07-24 09:19:23.969462] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.969583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.969609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.969625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.969638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.060 [2024-07-24 09:19:23.969668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.060 qpair failed and we were unable to recover it. 00:33:46.060 [2024-07-24 09:19:23.979486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.060 [2024-07-24 09:19:23.979600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.060 [2024-07-24 09:19:23.979627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.060 [2024-07-24 09:19:23.979642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.060 [2024-07-24 09:19:23.979655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:23.979686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 
00:33:46.061 [2024-07-24 09:19:23.989582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:23.989699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:23.989726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:23.989741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:23.989755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:23.989784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:23.999579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:23.999704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:23.999730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:23.999747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:23.999761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:23.999791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.009621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.009734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.009760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.009775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.009794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.009826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 
00:33:46.061 [2024-07-24 09:19:24.019625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.019742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.019769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.019783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.019809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.019841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.029623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.029777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.029803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.029818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.029831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.029861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.039665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.039781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.039807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.039822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.039835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.039864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 
00:33:46.061 [2024-07-24 09:19:24.049694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.049806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.049832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.049847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.049860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.049901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.059710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.059856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.059882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.059896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.059909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.059941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.069737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.069844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.069871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.069886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.069899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.069929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 
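The two negative codes in each record look like plain POSIX errno values travelling up the host stack: nvme_fabric.c reports rc -5 (EIO is 5 on Linux) when the CONNECT poll fails, and spdk_nvme_qpair_process_completions then reports "CQ transport error -6", whose parenthetical "No such device or address" is exactly strerror(ENXIO). A quick check, assuming only standard Linux errno numbering:

    /* errno_check.c - confirm the spellings behind rc -5 and transport error -6. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int codes[] = { EIO /* 5 */, ENXIO /* 6 */ };
        for (int i = 0; i < 2; i++)
            printf("-%d -> %s\n", codes[i], strerror(codes[i]));
        /* On Linux this prints:
         *   -5 -> Input/output error
         *   -6 -> No such device or address   (matches the log's parenthetical)
         */
        return 0;
    }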
00:33:46.061 [2024-07-24 09:19:24.079808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.079926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.079952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.079967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.079980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.080010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.089794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.089905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.089932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.089947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.089961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.089991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.061 [2024-07-24 09:19:24.099919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.100049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.100076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.100096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.100125] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.100161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 
00:33:46.061 [2024-07-24 09:19:24.109849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.061 [2024-07-24 09:19:24.109961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.061 [2024-07-24 09:19:24.109987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.061 [2024-07-24 09:19:24.110002] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.061 [2024-07-24 09:19:24.110015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.061 [2024-07-24 09:19:24.110045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.061 qpair failed and we were unable to recover it. 00:33:46.062 [2024-07-24 09:19:24.119910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.120026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.120052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.120067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.120080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.120119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 00:33:46.062 [2024-07-24 09:19:24.129923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.130044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.130070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.130085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.130098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.130137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 
00:33:46.062 [2024-07-24 09:19:24.139974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.140095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.140133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.140150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.140163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.140193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 00:33:46.062 [2024-07-24 09:19:24.149984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.150093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.150129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.150144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.150158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.150187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 00:33:46.062 [2024-07-24 09:19:24.160004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.160130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.160165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.160181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.160194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.160225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 
00:33:46.062 [2024-07-24 09:19:24.170039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.062 [2024-07-24 09:19:24.170162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.062 [2024-07-24 09:19:24.170189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.062 [2024-07-24 09:19:24.170204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.062 [2024-07-24 09:19:24.170217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.062 [2024-07-24 09:19:24.170247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.062 qpair failed and we were unable to recover it. 00:33:46.321 [2024-07-24 09:19:24.180079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.321 [2024-07-24 09:19:24.180202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.321 [2024-07-24 09:19:24.180229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.321 [2024-07-24 09:19:24.180245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.321 [2024-07-24 09:19:24.180258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.321 [2024-07-24 09:19:24.180301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.321 qpair failed and we were unable to recover it. 00:33:46.321 [2024-07-24 09:19:24.190124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.321 [2024-07-24 09:19:24.190234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.190265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.190281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.190294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.190324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 
00:33:46.322 [2024-07-24 09:19:24.200155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.200269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.200295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.200310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.200323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.200353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.210186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.210305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.210331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.210346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.210359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.210401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.220194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.220318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.220344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.220359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.220372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.220404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 
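The sct/sc pair that nvme_fabric.c prints is unpacked from the 16-bit status field of the completion queue entry (DW3 bits 31:17, plus the phase bit in bit 16). For reference, a decoder over that layout, with bit positions taken from the NVMe base specification (P[0], SC[8:1], SCT[11:9], CRD[13:12], M[14], DNR[15]):

    /* cqe_status.c - unpack sct/sc from a raw 16-bit CQE status field. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Build the status this log keeps reporting: sct 1, sc 0x82 (130). */
        uint16_t status = (uint16_t)((0x1u << 9) | (0x82u << 1));

        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned dnr = (status >> 15) & 0x1;

        printf("sct %u, sc %u, dnr %u\n", sct, sc, dnr);  /* -> sct 1, sc 130, dnr 0 */
        return 0;
    }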
00:33:46.322 [2024-07-24 09:19:24.230319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.230441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.230467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.230482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.230495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.230531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.240300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.240417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.240443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.240457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.240471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.240500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.250306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.250429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.250456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.250473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.250487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.250516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 
00:33:46.322 [2024-07-24 09:19:24.260385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.260501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.260527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.260542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.260555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.260585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.270334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.270447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.270473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.270487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.270501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.270530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.280403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.280526] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.280559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.280575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.280589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.280618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 
00:33:46.322 [2024-07-24 09:19:24.290393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.290569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.290594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.290608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.290622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.290651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.300391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.300509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.300534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.300549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.300562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.300592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 00:33:46.322 [2024-07-24 09:19:24.310424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.322 [2024-07-24 09:19:24.310539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.322 [2024-07-24 09:19:24.310565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.322 [2024-07-24 09:19:24.310580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.322 [2024-07-24 09:19:24.310593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.322 [2024-07-24 09:19:24.310623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.322 qpair failed and we were unable to recover it. 
00:33:46.322 [2024-07-24 09:19:24.320472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.320639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.320665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.320680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.320693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.320729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.330480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.330599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.330625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.330640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.330654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.330683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.340540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.340679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.340705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.340720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.340733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.340763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 
00:33:46.323 [2024-07-24 09:19:24.350531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.350642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.350668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.350683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.350696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.350726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.360555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.360697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.360722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.360737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.360750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.360780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.370619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.370780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.370808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.370823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.370836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.370867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 
00:33:46.323 [2024-07-24 09:19:24.380638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.380743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.380769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.380783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.380796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.380827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.390676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.390788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.390814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.390828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.390841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.390871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.400702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.400823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.400848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.400863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.400876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.400905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 
00:33:46.323 [2024-07-24 09:19:24.410703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.410809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.410835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.410850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.410868] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.410898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.420754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.420871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.420898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.420913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.420931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.420961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 00:33:46.323 [2024-07-24 09:19:24.430750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.323 [2024-07-24 09:19:24.430855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.323 [2024-07-24 09:19:24.430882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.323 [2024-07-24 09:19:24.430896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.323 [2024-07-24 09:19:24.430909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.323 [2024-07-24 09:19:24.430939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.323 qpair failed and we were unable to recover it. 
00:33:46.582 [2024-07-24 09:19:24.440787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.582 [2024-07-24 09:19:24.440912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.582 [2024-07-24 09:19:24.440940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.582 [2024-07-24 09:19:24.440960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.582 [2024-07-24 09:19:24.440985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.582 [2024-07-24 09:19:24.441019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.582 qpair failed and we were unable to recover it. 00:33:46.582 [2024-07-24 09:19:24.450830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.582 [2024-07-24 09:19:24.450947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.582 [2024-07-24 09:19:24.450972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.582 [2024-07-24 09:19:24.450986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.582 [2024-07-24 09:19:24.450999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.582 [2024-07-24 09:19:24.451028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.582 qpair failed and we were unable to recover it. 00:33:46.582 [2024-07-24 09:19:24.460890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.582 [2024-07-24 09:19:24.461008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.582 [2024-07-24 09:19:24.461034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.582 [2024-07-24 09:19:24.461049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.582 [2024-07-24 09:19:24.461062] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.582 [2024-07-24 09:19:24.461092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.582 qpair failed and we were unable to recover it. 
00:33:46.583 [2024-07-24 09:19:24.470877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.470991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.471018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.471033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.471047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.471076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.480902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.481067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.481093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.481115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.481130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.481160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.490917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.491025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.491051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.491066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.491079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.491117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 
00:33:46.583 [2024-07-24 09:19:24.500954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.501096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.501128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.501149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.501163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.501194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.511005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.511129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.511157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.511172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.511185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.511215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.521018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.521157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.521184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.521198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.521212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.521243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 
00:33:46.583 [2024-07-24 09:19:24.531049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.531224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.531251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.531265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.531279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.531310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.541130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.541274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.541300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.541314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.541327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.541359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.551110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.551235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.551261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.551276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.551290] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.551321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 
00:33:46.583 [2024-07-24 09:19:24.561168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.561287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.561313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.561328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.561341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.561384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.571231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.571343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.571370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.571385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.571398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.571428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.581207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.581312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.581338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.581353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.581366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.581398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 
00:33:46.583 [2024-07-24 09:19:24.591205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.591319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.583 [2024-07-24 09:19:24.591350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.583 [2024-07-24 09:19:24.591365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.583 [2024-07-24 09:19:24.591379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.583 [2024-07-24 09:19:24.591409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.583 qpair failed and we were unable to recover it. 00:33:46.583 [2024-07-24 09:19:24.601312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.583 [2024-07-24 09:19:24.601424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.601449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.601464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.601477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.601519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.611285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.611397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.611423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.611438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.611451] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.611481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-24 09:19:24.621319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.621438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.621464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.621480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.621493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.621523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.631377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.631491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.631518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.631533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.631546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.631582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.641434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.641559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.641585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.641600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.641614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.641644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-24 09:19:24.651424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.651552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.651578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.651594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.651608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.651651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.661460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.661583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.661608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.661625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.661638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.661669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.671449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.671579] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.671608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.671623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.671636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.671665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 
00:33:46.584 [2024-07-24 09:19:24.681503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.681625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.681657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.681675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.681688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.681719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.584 [2024-07-24 09:19:24.691514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.584 [2024-07-24 09:19:24.691630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.584 [2024-07-24 09:19:24.691656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.584 [2024-07-24 09:19:24.691671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.584 [2024-07-24 09:19:24.691685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.584 [2024-07-24 09:19:24.691715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.584 qpair failed and we were unable to recover it. 00:33:46.842 [2024-07-24 09:19:24.701534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.842 [2024-07-24 09:19:24.701651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.842 [2024-07-24 09:19:24.701684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.842 [2024-07-24 09:19:24.701710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.701729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.701771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 
00:33:46.843 [2024-07-24 09:19:24.711602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.711729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.711756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.711770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.711784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.711813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.721640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.721758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.721784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.721799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.721813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.721849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.731639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.731752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.731778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.731793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.731807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.731837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 
00:33:46.843 [2024-07-24 09:19:24.741711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.741881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.741908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.741922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.741936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.741966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.751663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.751772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.751798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.751813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.751826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.751856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.761774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.761895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.761922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.761936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.761949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.761980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 
00:33:46.843 [2024-07-24 09:19:24.771753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.771865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.771899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.771914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.771928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.771959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.781767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.781883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.781909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.781924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.781937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.781967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.791827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.791950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.791976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.791991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.792005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.792035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 
00:33:46.843 [2024-07-24 09:19:24.801836] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.801956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.801982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.801996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.802010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.802040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.811867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.811978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.812005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.812020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.812038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.812069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.843 [2024-07-24 09:19:24.821882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.821991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.822017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.822032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.822045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.822089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 
00:33:46.843 [2024-07-24 09:19:24.831925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.843 [2024-07-24 09:19:24.832039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.843 [2024-07-24 09:19:24.832065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.843 [2024-07-24 09:19:24.832079] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.843 [2024-07-24 09:19:24.832093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.843 [2024-07-24 09:19:24.832141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.843 qpair failed and we were unable to recover it. 00:33:46.844 [2024-07-24 09:19:24.841980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.844 [2024-07-24 09:19:24.842114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.844 [2024-07-24 09:19:24.842140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.844 [2024-07-24 09:19:24.842154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.844 [2024-07-24 09:19:24.842168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.844 [2024-07-24 09:19:24.842198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.844 qpair failed and we were unable to recover it. 00:33:46.844 [2024-07-24 09:19:24.851992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.844 [2024-07-24 09:19:24.852116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.844 [2024-07-24 09:19:24.852143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.844 [2024-07-24 09:19:24.852157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.844 [2024-07-24 09:19:24.852170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.844 [2024-07-24 09:19:24.852202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.844 qpair failed and we were unable to recover it. 
00:33:46.844 [2024-07-24 09:19:24.862006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.844 [2024-07-24 09:19:24.862136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.844 [2024-07-24 09:19:24.862163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.844 [2024-07-24 09:19:24.862178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.844 [2024-07-24 09:19:24.862191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.844 [2024-07-24 09:19:24.862223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.844 qpair failed and we were unable to recover it. 00:33:46.844 [2024-07-24 09:19:24.872026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.844 [2024-07-24 09:19:24.872139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.844 [2024-07-24 09:19:24.872165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.844 [2024-07-24 09:19:24.872179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.844 [2024-07-24 09:19:24.872193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.844 [2024-07-24 09:19:24.872222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.844 qpair failed and we were unable to recover it. 00:33:46.844 [2024-07-24 09:19:24.882053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:46.844 [2024-07-24 09:19:24.882207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:46.844 [2024-07-24 09:19:24.882233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:46.844 [2024-07-24 09:19:24.882247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:46.844 [2024-07-24 09:19:24.882261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:46.844 [2024-07-24 09:19:24.882292] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:46.844 qpair failed and we were unable to recover it. 
00:33:46.844 [2024-07-24 09:19:24.892069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.892210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.892236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.892251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.892265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.892295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.902140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.902288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.902315] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.902338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.902353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.902395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.912157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.912270] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.912296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.912312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.912325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.912355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.922158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.922269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.922295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.922309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.922323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.922352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.932203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.932316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.932342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.932357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.932370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.932400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.942247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.942356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.942382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.942397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.942410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.942440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:46.844 [2024-07-24 09:19:24.952242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:46.844 [2024-07-24 09:19:24.952350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:46.844 [2024-07-24 09:19:24.952376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:46.844 [2024-07-24 09:19:24.952391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:46.844 [2024-07-24 09:19:24.952405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:46.844 [2024-07-24 09:19:24.952436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:46.844 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:24.962289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:24.962412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:24.962438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:24.962454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:24.962467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:24.962503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:24.972305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:24.972442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:24.972469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:24.972483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:24.972497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:24.972527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:24.982369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:24.982491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:24.982517] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:24.982532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:24.982545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:24.982575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:24.992403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:24.992520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:24.992546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:24.992567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:24.992580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:24.992611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.002443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.002563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.002589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.002604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.002617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.002648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.012403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.012515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.012541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.012555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.012568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.012598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.022484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.022604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.022630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.022645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.022659] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.022688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.032473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.032598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.032624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.032639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.032652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.032681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.042514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.042633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.042659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.042674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.042687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.042718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.052576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.052700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.052727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.052741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.052754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.052784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.062567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.062721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.062747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.062761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.062775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.062816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.072641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.072795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.072821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.072836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.072849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.072881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.082607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.082722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.082753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.082768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.082781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.082810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.092682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.092822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.092848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.092862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.092875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.092904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.102734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.102844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.102871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.102885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.102899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.102929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.112731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.112842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.112867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.112882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.112895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.103 [2024-07-24 09:19:25.112924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.103 qpair failed and we were unable to recover it.
00:33:47.103 [2024-07-24 09:19:25.122758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.103 [2024-07-24 09:19:25.122906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.103 [2024-07-24 09:19:25.122932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.103 [2024-07-24 09:19:25.122947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.103 [2024-07-24 09:19:25.122961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.122996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.132799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.132910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.132936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.132950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.132964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.132996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.142830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.142938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.142965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.142979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.142993] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.143023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.152866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.152982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.153009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.153024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.153037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.153081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.162922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.163047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.163074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.163093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.163119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.163152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.172890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.173032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.173062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.173078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.173091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.173132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.182930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.183051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.183078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.183093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.183114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.183146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.192973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.193091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.193124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.193140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.193153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.193182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.203020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.203151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.203177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.203191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.203205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.203236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.104 [2024-07-24 09:19:25.213032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.104 [2024-07-24 09:19:25.213177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.104 [2024-07-24 09:19:25.213203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.104 [2024-07-24 09:19:25.213219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.104 [2024-07-24 09:19:25.213240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.104 [2024-07-24 09:19:25.213272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.104 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.223026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.223143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.223170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.223185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.223198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.223228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.233089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.233201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.233227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.233241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.233255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.233285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.243112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.243226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.243253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.243267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.243281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.243311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.253130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.253242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.253267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.253281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.253295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.253325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.263192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.263354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.263380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.263395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.263408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.263437] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.273250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.273360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.273386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.273401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.273414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.273445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.283223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.283339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.283365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.283380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.283393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.283423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.293246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.293366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.293391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.293406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.293419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.293450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.303293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.303405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.303434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.303454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.303471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.303501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.313333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.313455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.313481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.313496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.313510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.313541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.323378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.323530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.323556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.323571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.323584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.323625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.333361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.333477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.333503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.333518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.333531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.333561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.363 [2024-07-24 09:19:25.343406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.363 [2024-07-24 09:19:25.343520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.363 [2024-07-24 09:19:25.343546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.363 [2024-07-24 09:19:25.343561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.363 [2024-07-24 09:19:25.343574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.363 [2024-07-24 09:19:25.343604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.363 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.353409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.353545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.353572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.353586] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.353599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.353631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.363448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.363562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.363588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.363603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.363616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.363646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.373548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.373660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.373689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.373706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.373719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.373764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.383498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.383609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.383636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.383651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.383665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.383694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.393521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.393623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.393649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.393670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.393684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.393736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.403650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.403807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.403833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.403847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.403861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.403903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.413581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.413694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.413719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.413734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.413747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.413777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.423637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.423744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.423771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.423786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.423800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.423832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.433651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.433763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.433789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.433803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.433817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.433849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.443736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.443856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.443882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.443896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.443910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.443939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.453765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.453913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.453938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.453952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.453964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.454005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.463762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.463878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.364 [2024-07-24 09:19:25.463904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.364 [2024-07-24 09:19:25.463919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.364 [2024-07-24 09:19:25.463932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.364 [2024-07-24 09:19:25.463963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.364 qpair failed and we were unable to recover it.
00:33:47.364 [2024-07-24 09:19:25.473788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.364 [2024-07-24 09:19:25.473895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.365 [2024-07-24 09:19:25.473923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.365 [2024-07-24 09:19:25.473951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.365 [2024-07-24 09:19:25.473967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.365 [2024-07-24 09:19:25.473999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.365 qpair failed and we were unable to recover it.
00:33:47.623 [2024-07-24 09:19:25.483779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.623 [2024-07-24 09:19:25.483895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.623 [2024-07-24 09:19:25.483927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.623 [2024-07-24 09:19:25.483942] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.623 [2024-07-24 09:19:25.483955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.623 [2024-07-24 09:19:25.483985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.623 qpair failed and we were unable to recover it.
00:33:47.623 [2024-07-24 09:19:25.493815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.493925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.493951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.493966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.493979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.494008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.503848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.503979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.504008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.504024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.504040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.504073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.513908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.514020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.514046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.514061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.514074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.514128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.523932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.524060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.524086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.524109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.524131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.524169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.533933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.534048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.534075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.534089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.534110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.534143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.543956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.544114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.544140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.544155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.544168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.544198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.554003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.554159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.554186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.554200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.554213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.554244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.564018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.564145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.564172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.564187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.564200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.564230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.574046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:47.624 [2024-07-24 09:19:25.574160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:47.624 [2024-07-24 09:19:25.574191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:47.624 [2024-07-24 09:19:25.574206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:47.624 [2024-07-24 09:19:25.574220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:47.624 [2024-07-24 09:19:25.574250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:47.624 qpair failed and we were unable to recover it.
00:33:47.624 [2024-07-24 09:19:25.584082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.624 [2024-07-24 09:19:25.584213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.624 [2024-07-24 09:19:25.584240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.624 [2024-07-24 09:19:25.584254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.624 [2024-07-24 09:19:25.584271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.624 [2024-07-24 09:19:25.584314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.624 qpair failed and we were unable to recover it. 00:33:47.624 [2024-07-24 09:19:25.594090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.624 [2024-07-24 09:19:25.594213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.624 [2024-07-24 09:19:25.594239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.624 [2024-07-24 09:19:25.594254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.624 [2024-07-24 09:19:25.594267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.624 [2024-07-24 09:19:25.594299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.624 qpair failed and we were unable to recover it. 00:33:47.624 [2024-07-24 09:19:25.604161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.624 [2024-07-24 09:19:25.604279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.624 [2024-07-24 09:19:25.604305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.624 [2024-07-24 09:19:25.604320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.624 [2024-07-24 09:19:25.604334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.624 [2024-07-24 09:19:25.604363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.624 qpair failed and we were unable to recover it. 
00:33:47.624 [2024-07-24 09:19:25.614189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.624 [2024-07-24 09:19:25.614308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.624 [2024-07-24 09:19:25.614334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.624 [2024-07-24 09:19:25.614348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.624 [2024-07-24 09:19:25.614367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.624 [2024-07-24 09:19:25.614399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.624 qpair failed and we were unable to recover it. 00:33:47.624 [2024-07-24 09:19:25.624222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.624 [2024-07-24 09:19:25.624377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.624403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.624418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.624431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.624461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.634336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.634454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.634480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.634494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.634508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.634538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 
00:33:47.625 [2024-07-24 09:19:25.644311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.644422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.644448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.644462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.644476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.644505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.654322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.654432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.654458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.654473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.654486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.654515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.664321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.664438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.664465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.664479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.664492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.664522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 
00:33:47.625 [2024-07-24 09:19:25.674313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.674427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.674453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.674468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.674482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.674512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.684422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.684541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.684568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.684582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.684596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.684640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.694405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.694515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.694542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.694557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.694570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.694612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 
00:33:47.625 [2024-07-24 09:19:25.704406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.704515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.704542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.704556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.704575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.704606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.714455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.714565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.714592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.714606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.714620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.714649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.625 [2024-07-24 09:19:25.724469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.724589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.724615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.724629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.724643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.724673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 
00:33:47.625 [2024-07-24 09:19:25.734484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.625 [2024-07-24 09:19:25.734612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.625 [2024-07-24 09:19:25.734639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.625 [2024-07-24 09:19:25.734654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.625 [2024-07-24 09:19:25.734667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.625 [2024-07-24 09:19:25.734697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.625 qpair failed and we were unable to recover it. 00:33:47.884 [2024-07-24 09:19:25.744560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.884 [2024-07-24 09:19:25.744668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.884 [2024-07-24 09:19:25.744696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.884 [2024-07-24 09:19:25.744710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.884 [2024-07-24 09:19:25.744724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.884 [2024-07-24 09:19:25.744755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.884 qpair failed and we were unable to recover it. 00:33:47.884 [2024-07-24 09:19:25.754578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.884 [2024-07-24 09:19:25.754706] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.884 [2024-07-24 09:19:25.754733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.884 [2024-07-24 09:19:25.754747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.884 [2024-07-24 09:19:25.754761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.884 [2024-07-24 09:19:25.754793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.884 qpair failed and we were unable to recover it. 
00:33:47.885 [2024-07-24 09:19:25.764620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.764763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.764789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.764804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.764817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.764847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.774609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.774724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.774750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.774765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.774778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.774808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.784631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.784746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.784772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.784786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.784799] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.784829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 
00:33:47.885 [2024-07-24 09:19:25.794703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.794825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.794853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.794874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.794888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.794921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.804719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.804837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.804863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.804878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.804891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.804923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.814750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.814865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.814891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.814906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.814920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.814950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 
00:33:47.885 [2024-07-24 09:19:25.824763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.824897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.824923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.824938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.824952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.824982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.834801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.834915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.834942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.834957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.834970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.835000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.844837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.844956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.844982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.844996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.845010] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.845040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 
00:33:47.885 [2024-07-24 09:19:25.854886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.855001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.855026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.855041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.855054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.855083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.864878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.864991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.865018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.865032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.865045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.865075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.874886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.875001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.875026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.875043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.875056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.875086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 
00:33:47.885 [2024-07-24 09:19:25.884955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.885 [2024-07-24 09:19:25.885084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.885 [2024-07-24 09:19:25.885131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.885 [2024-07-24 09:19:25.885147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.885 [2024-07-24 09:19:25.885161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.885 [2024-07-24 09:19:25.885191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.885 qpair failed and we were unable to recover it. 00:33:47.885 [2024-07-24 09:19:25.894944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.895097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.895130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.895145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.895159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.895189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.905004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.905122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.905149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.905164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.905178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.905209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 
00:33:47.886 [2024-07-24 09:19:25.915031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.915165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.915192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.915207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.915220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.915249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.925085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.925229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.925254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.925269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.925283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.925319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.935093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.935218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.935245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.935260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.935276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.935309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 
00:33:47.886 [2024-07-24 09:19:25.945141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.945260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.945287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.945302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.945315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.945357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.955140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.955260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.955287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.955301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.955315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.955344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.965188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.965307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.965333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.965347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.965361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.965390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 
00:33:47.886 [2024-07-24 09:19:25.975182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.975289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.975320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.975335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.975348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.975378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.985216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.985325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.985351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.985367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.985380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.985410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 00:33:47.886 [2024-07-24 09:19:25.995261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:47.886 [2024-07-24 09:19:25.995414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:47.886 [2024-07-24 09:19:25.995440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:47.886 [2024-07-24 09:19:25.995455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:47.886 [2024-07-24 09:19:25.995468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:47.886 [2024-07-24 09:19:25.995510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:47.886 qpair failed and we were unable to recover it. 
00:33:48.145 [2024-07-24 09:19:26.005285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.145 [2024-07-24 09:19:26.005404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.145 [2024-07-24 09:19:26.005431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.145 [2024-07-24 09:19:26.005446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.145 [2024-07-24 09:19:26.005459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:48.146 [2024-07-24 09:19:26.005489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-24 09:19:26.015339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.146 [2024-07-24 09:19:26.015452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.146 [2024-07-24 09:19:26.015478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.146 [2024-07-24 09:19:26.015493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.146 [2024-07-24 09:19:26.015506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:48.146 [2024-07-24 09:19:26.015542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.146 qpair failed and we were unable to recover it. 00:33:48.146 [2024-07-24 09:19:26.025317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.146 [2024-07-24 09:19:26.025434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.146 [2024-07-24 09:19:26.025460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.146 [2024-07-24 09:19:26.025475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.146 [2024-07-24 09:19:26.025488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90 00:33:48.146 [2024-07-24 09:19:26.025518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:33:48.146 qpair failed and we were unable to recover it. 
00:33:48.146 [2024-07-24 09:19:26.035360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:33:48.146 [2024-07-24 09:19:26.035475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:33:48.146 [2024-07-24 09:19:26.035500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:33:48.146 [2024-07-24 09:19:26.035515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:33:48.146 [2024-07-24 09:19:26.035529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7428000b90
00:33:48.146 [2024-07-24 09:19:26.035558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:33:48.146 qpair failed and we were unable to recover it.
00:33:48.668 [2024-07-24 09:19:26.667191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.668 [2024-07-24 09:19:26.667326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.668 [2024-07-24 09:19:26.667355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.668 [2024-07-24 09:19:26.667380] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.668 [2024-07-24 09:19:26.667404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:33:48.668 [2024-07-24 09:19:26.667451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:48.668 qpair failed and we were unable to recover it. 00:33:48.668 [2024-07-24 09:19:26.677217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:33:48.668 [2024-07-24 09:19:26.677345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:33:48.668 [2024-07-24 09:19:26.677374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:33:48.668 [2024-07-24 09:19:26.677406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:33:48.668 [2024-07-24 09:19:26.677433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f7418000b90 00:33:48.668 [2024-07-24 09:19:26.677493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:33:48.668 qpair failed and we were unable to recover it. 00:33:48.668 [2024-07-24 09:19:26.677652] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:33:48.668 A controller has encountered a failure and is being reset. 00:33:48.927 Controller properly reset. 00:33:48.927 Initializing NVMe Controllers 00:33:48.927 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.927 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:33:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:33:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:33:48.927 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:33:48.927 Initialization complete. Launching workers. 
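The repeated failures above all decode the same way: the initiator's Fabrics CONNECT for an I/O qpair completes with sct 1, sc 130 while the target reports "Unknown controller ID 0x1", i.e. the qpair names a controller the target has already torn down, which is precisely the condition this disconnect test provokes; the subsequent keep-alive failure then triggers the controller reset logged above. A minimal decoding sketch (status values taken from the NVMe-oF specification's CONNECT status table, not from the test suite; sc 130 == 0x82):

#!/usr/bin/env bash
# Decode the sct/sc pairs printed in the CONNECT failure lines above.
# Assumption: sct 1 on a Fabrics CONNECT uses the command-specific codes below.
declare -A fabrics_connect_sc=(
  [128]="CONNECT Incompatible Format"   # 0x80
  [129]="CONNECT Controller Busy"       # 0x81
  [130]="CONNECT Invalid Parameters"    # 0x82
  [131]="CONNECT Restart Discovery"     # 0x83
  [132]="CONNECT Invalid Host"          # 0x84
)
decode_status() {  # usage: decode_status <sct> <sc>
  local sct=$1 sc=$2
  if [[ $sct -eq 1 && -n ${fabrics_connect_sc[$sc]:-} ]]; then
    echo "${fabrics_connect_sc[$sc]}"
  else
    echo "sct=$sct sc=$sc (see the status tables in the NVMe base specification)"
  fi
}
decode_status 1 130   # prints: CONNECT Invalid Parameters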
00:33:48.927 Starting thread on core 1 00:33:48.927 Starting thread on core 2 00:33:48.927 Starting thread on core 3 00:33:48.927 Starting thread on core 0 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:33:48.927 00:33:48.927 real 0m10.935s 00:33:48.927 user 0m18.122s 00:33:48.927 sys 0m5.530s 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:33:48.927 ************************************ 00:33:48.927 END TEST nvmf_target_disconnect_tc2 00:33:48.927 ************************************ 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:48.927 rmmod nvme_tcp 00:33:48.927 rmmod nvme_fabrics 00:33:48.927 rmmod nvme_keyring 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3925570 ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3925570 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3925570 ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3925570 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3925570 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3925570' 00:33:48.927 killing process with pid 3925570 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@967 -- # kill 3925570 00:33:48.927 09:19:26 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3925570 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:49.186 09:19:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:51.722 00:33:51.722 real 0m15.698s 00:33:51.722 user 0m44.921s 00:33:51.722 sys 0m7.514s 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:33:51.722 ************************************ 00:33:51.722 END TEST nvmf_target_disconnect 00:33:51.722 ************************************ 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:51.722 00:33:51.722 real 6m31.128s 00:33:51.722 user 16m35.160s 00:33:51.722 sys 1m26.721s 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.722 09:19:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:51.722 ************************************ 00:33:51.722 END TEST nvmf_host 00:33:51.722 ************************************ 00:33:51.722 00:33:51.722 real 27m5.927s 00:33:51.722 user 73m17.201s 00:33:51.722 sys 6m31.501s 00:33:51.722 09:19:29 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:51.722 09:19:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.722 ************************************ 00:33:51.722 END TEST nvmf_tcp 00:33:51.722 ************************************ 00:33:51.722 09:19:29 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:33:51.722 09:19:29 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:51.722 09:19:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:51.722 09:19:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:51.723 09:19:29 -- common/autotest_common.sh@10 -- # set +x 00:33:51.723 ************************************ 00:33:51.723 START TEST spdkcli_nvmf_tcp 00:33:51.723 ************************************ 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:51.723 * Looking for test storage... 
00:33:51.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3926772 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3926772 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3926772 ']' 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:51.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.723 [2024-07-24 09:19:29.476889] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:33:51.723 [2024-07-24 09:19:29.476973] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3926772 ] 00:33:51.723 EAL: No free 2048 kB hugepages reported on node 1 00:33:51.723 [2024-07-24 09:19:29.507467] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:33:51.723 [2024-07-24 09:19:29.534712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:51.723 [2024-07-24 09:19:29.620123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:51.723 [2024-07-24 09:19:29.620133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:51.723 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:51.724 09:19:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:51.724 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:51.724 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:51.724 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:51.724 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:51.724 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:51.724 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:51.724 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.724 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 
allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.724 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:51.724 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:51.724 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:51.724 ' 00:33:54.254 [2024-07-24 09:19:32.318970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.625 [2024-07-24 09:19:33.551355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:58.149 [2024-07-24 09:19:35.802370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:00.076 [2024-07-24 09:19:37.752569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:01.449 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:01.449 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:01.449 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:01.449 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.449 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.449 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:01.449 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:01.449 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:01.449 09:19:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll 
/nvmf 00:34:01.706 09:19:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:01.965 09:19:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:01.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:01.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:01.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:01.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:01.965 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:01.965 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:01.965 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:01.965 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:01.965 ' 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:07.229 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:07.229 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:07.229 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:07.229 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:07.229 
Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:07.229 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:07.229 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:07.229 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:07.229 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3926772 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3926772 ']' 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3926772 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3926772 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3926772' 00:34:07.229 killing process with pid 3926772 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3926772 00:34:07.229 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3926772 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3926772 ']' 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3926772 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3926772 ']' 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3926772 00:34:07.487 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3926772) - No such process 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3926772 is not found' 00:34:07.487 Process with pid 3926772 is not found 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:07.487 00:34:07.487 real 0m16.027s 00:34:07.487 user 0m33.930s 00:34:07.487 sys 0m0.847s 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:07.487 09:19:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:07.487 ************************************ 00:34:07.487 END TEST spdkcli_nvmf_tcp 00:34:07.487 ************************************ 00:34:07.487 09:19:45 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.487 09:19:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:07.487 09:19:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:07.487 09:19:45 -- common/autotest_common.sh@10 -- # set +x 00:34:07.487 ************************************ 00:34:07.487 START TEST nvmf_identify_passthru 00:34:07.487 ************************************ 00:34:07.487 09:19:45 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:07.487 * Looking for test storage... 00:34:07.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:07.488 09:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:07.488 09:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:07.488 09:19:45 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:07.488 09:19:45 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:07.488 09:19:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:07.488 09:19:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:07.488 09:19:45 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:07.488 09:19:45 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:09.390 09:19:47 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:09.390 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:09.391 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:09.391 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:09.391 Found net devices under 0000:09:00.0: cvl_0_0 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:09.391 Found net devices under 0000:09:00.1: cvl_0_1 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
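With both E810 ports discovered (cvl_0_0 and cvl_0_1), the nvmf_tcp_init steps traced below wire them into a loopback-style topology: the target-side port is moved into a network namespace and addressed as 10.0.0.2 while the initiator side keeps 10.0.0.1, so a single host exercises a real NIC-to-NIC TCP path. Condensed from that xtrace, a sketch of the same wiring (interface names are the ones discovered above; the harness's address flushes are omitted; run as root and substitute your own ports):

# Put the target-side port in its own namespace, address both ends, and
# open TCP/4420 for NVMe-oF; mirrors the nvmf_tcp_init steps traced below.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator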
00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:09.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:09.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:34:09.391 00:34:09.391 --- 10.0.0.2 ping statistics --- 00:34:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.391 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:09.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:09.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:34:09.391 00:34:09.391 --- 10.0.0.1 ping statistics --- 00:34:09.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:09.391 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:09.391 09:19:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:09.392 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:09.392 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:09.392 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:34:09.650 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:34:09.650 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:0b:00.0 00:34:09.650 09:19:47 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:0b:00.0 00:34:09.650 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:09.650 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:09.650 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:09.650 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:09.650 09:19:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:09.650 EAL: No free 2048 kB hugepages reported on node 1 00:34:13.859 
09:19:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:13.859 09:19:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:13.859 09:19:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:13.859 09:19:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:13.859 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3931259 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:18.044 09:19:55 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3931259 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3931259 ']' 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:18.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:18.044 09:19:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.044 [2024-07-24 09:19:55.875272] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:34:18.044 [2024-07-24 09:19:55.875355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:18.044 EAL: No free 2048 kB hugepages reported on node 1 00:34:18.044 [2024-07-24 09:19:55.913118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:18.044 [2024-07-24 09:19:55.945116] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:18.044 [2024-07-24 09:19:56.033790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
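For reference, the nvmf_tcp_init plumbing traced above reduces to a short sequence. A minimal sketch, assuming a dual-port NIC whose ports appear as cvl_0_0 and cvl_0_1 (interface names and addresses are specific to this rig):

# target port moves into a private namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP from the initiator port
ping -c 1 10.0.0.2                                                 # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> root namespace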
00:34:18.044 [2024-07-24 09:19:56.033842] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:18.044 [2024-07-24 09:19:56.033858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:18.044 [2024-07-24 09:19:56.033872] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:18.044 [2024-07-24 09:19:56.033884] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:18.044 [2024-07-24 09:19:56.033970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:18.044 [2024-07-24 09:19:56.034032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:18.044 [2024-07-24 09:19:56.034192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:18.044 [2024-07-24 09:19:56.034195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:18.044 09:19:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.044 INFO: Log level set to 20 00:34:18.044 INFO: Requests: 00:34:18.044 { 00:34:18.044 "jsonrpc": "2.0", 00:34:18.044 "method": "nvmf_set_config", 00:34:18.044 "id": 1, 00:34:18.044 "params": { 00:34:18.044 "admin_cmd_passthru": { 00:34:18.044 "identify_ctrlr": true 00:34:18.044 } 00:34:18.044 } 00:34:18.044 } 00:34:18.044 00:34:18.044 INFO: response: 00:34:18.044 { 00:34:18.044 "jsonrpc": "2.0", 00:34:18.044 "id": 1, 00:34:18.044 "result": true 00:34:18.044 } 00:34:18.044 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.044 09:19:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.044 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.044 INFO: Setting log level to 20 00:34:18.044 INFO: Setting log level to 20 00:34:18.044 INFO: Log level set to 20 00:34:18.044 INFO: Log level set to 20 00:34:18.044 INFO: Requests: 00:34:18.044 { 00:34:18.044 "jsonrpc": "2.0", 00:34:18.044 "method": "framework_start_init", 00:34:18.044 "id": 1 00:34:18.044 } 00:34:18.044 00:34:18.044 INFO: Requests: 00:34:18.044 { 00:34:18.044 "jsonrpc": "2.0", 00:34:18.044 "method": "framework_start_init", 00:34:18.044 "id": 1 00:34:18.044 } 00:34:18.044 00:34:18.302 [2024-07-24 09:19:56.185287] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:18.302 INFO: response: 00:34:18.302 { 00:34:18.302 "jsonrpc": "2.0", 00:34:18.302 "id": 1, 00:34:18.302 "result": true 00:34:18.302 } 00:34:18.302 00:34:18.302 INFO: response: 00:34:18.302 { 00:34:18.302 "jsonrpc": "2.0", 00:34:18.302 "id": 1, 00:34:18.302 "result": true 00:34:18.302 } 00:34:18.302 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.302 09:19:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 
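Because nvmf_tgt was launched with --wait-for-rpc, everything from here on is driven over JSON-RPC (rpc_cmd in the trace is the test suite's wrapper around scripts/rpc.py). The same bring-up can be issued by hand against the default /var/tmp/spdk.sock; a sketch using the values from this run:

./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must be set before framework init
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # -u: IO unit size in bytes
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# the passthru check: identify over the fabric should report the physical drive's serial and model
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep -E 'Serial Number:|Model Number:'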
00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.302 INFO: Setting log level to 40 00:34:18.302 INFO: Setting log level to 40 00:34:18.302 INFO: Setting log level to 40 00:34:18.302 [2024-07-24 09:19:56.195259] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:18.302 09:19:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:18.302 09:19:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:18.302 09:19:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.582 Nvme0n1 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.582 [2024-07-24 09:19:59.084360] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.582 [ 00:34:21.582 { 00:34:21.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:21.582 "subtype": "Discovery", 00:34:21.582 "listen_addresses": [], 00:34:21.582 "allow_any_host": true, 00:34:21.582 "hosts": [] 00:34:21.582 }, 00:34:21.582 { 00:34:21.582 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:21.582 "subtype": "NVMe", 00:34:21.582 "listen_addresses": [ 00:34:21.582 { 00:34:21.582 "trtype": "TCP", 00:34:21.582 "adrfam": "IPv4", 00:34:21.582 "traddr": "10.0.0.2", 00:34:21.582 
"trsvcid": "4420" 00:34:21.582 } 00:34:21.582 ], 00:34:21.582 "allow_any_host": true, 00:34:21.582 "hosts": [], 00:34:21.582 "serial_number": "SPDK00000000000001", 00:34:21.582 "model_number": "SPDK bdev Controller", 00:34:21.582 "max_namespaces": 1, 00:34:21.582 "min_cntlid": 1, 00:34:21.582 "max_cntlid": 65519, 00:34:21.582 "namespaces": [ 00:34:21.582 { 00:34:21.582 "nsid": 1, 00:34:21.582 "bdev_name": "Nvme0n1", 00:34:21.582 "name": "Nvme0n1", 00:34:21.582 "nguid": "9A85F03E6D594C65A6384CAEA90DFB3D", 00:34:21.582 "uuid": "9a85f03e-6d59-4c65-a638-4caea90dfb3d" 00:34:21.582 } 00:34:21.582 ] 00:34:21.582 } 00:34:21.582 ] 00:34:21.582 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:21.582 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:21.582 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:21.582 EAL: No free 2048 kB hugepages reported on node 1 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:21.583 09:19:59 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:21.583 rmmod nvme_tcp 00:34:21.583 rmmod nvme_fabrics 00:34:21.583 rmmod nvme_keyring 00:34:21.583 09:19:59 nvmf_identify_passthru -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3931259 ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3931259 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3931259 ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3931259 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3931259 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3931259' 00:34:21.583 killing process with pid 3931259 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3931259 00:34:21.583 09:19:59 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3931259 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:22.957 09:20:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.957 09:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:22.957 09:20:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.492 09:20:03 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:25.492 00:34:25.492 real 0m17.596s 00:34:25.492 user 0m26.118s 00:34:25.492 sys 0m2.164s 00:34:25.492 09:20:03 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:25.492 09:20:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:25.492 ************************************ 00:34:25.492 END TEST nvmf_identify_passthru 00:34:25.492 ************************************ 00:34:25.492 09:20:03 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:25.492 09:20:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:25.492 09:20:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:25.492 09:20:03 -- common/autotest_common.sh@10 -- # set +x 00:34:25.492 ************************************ 00:34:25.492 START TEST nvmf_dif 00:34:25.492 ************************************ 00:34:25.492 09:20:03 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:25.492 * Looking for test storage... 
00:34:25.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.492 09:20:03 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.492 09:20:03 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.492 09:20:03 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.492 09:20:03 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.492 09:20:03 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.492 09:20:03 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.492 09:20:03 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:25.492 09:20:03 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:25.492 09:20:03 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.492 09:20:03 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:25.492 09:20:03 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:25.492 09:20:03 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:25.492 09:20:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:27.393 09:20:05 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:27.394 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:27.394 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
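The NIC scan above matches ports purely by PCI vendor/device ID (0x8086:0x159b is an Intel E810 port). The same lookup by hand, as a sketch:

lspci -d 8086:159b                         # E810 ports; this rig reports 0000:09:00.0 and 0000:09:00.1
ls /sys/bus/pci/devices/0000:09:00.0/net/  # netdev name bound to a port (cvl_0_0 here), the same path the script globs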
00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:27.394 Found net devices under 0000:09:00.0: cvl_0_0 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:27.394 Found net devices under 0000:09:00.1: cvl_0_1 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:27.394 09:20:05 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:27.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:27.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:34:27.394 00:34:27.394 --- 10.0.0.2 ping statistics --- 00:34:27.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.394 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:27.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:27.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:34:27.394 00:34:27.394 --- 10.0.0.1 ping statistics --- 00:34:27.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:27.394 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:27.394 09:20:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:28.329 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:28.329 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:28.329 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:28.329 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:28.329 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:28.329 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:28.329 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:28.329 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:28.329 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:28.329 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:28.329 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:28.329 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:28.329 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:28.329 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:28.329 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:28.329 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:28.329 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:28.587 09:20:06 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:28.587 09:20:06 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3934401 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:28.587 09:20:06 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3934401 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3934401 ']' 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:28.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:28.587 09:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.587 [2024-07-24 09:20:06.603020] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:34:28.588 [2024-07-24 09:20:06.603130] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:28.588 EAL: No free 2048 kB hugepages reported on node 1 00:34:28.588 [2024-07-24 09:20:06.641242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:34:28.588 [2024-07-24 09:20:06.672886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:28.846 [2024-07-24 09:20:06.759140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:28.846 [2024-07-24 09:20:06.759203] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:28.846 [2024-07-24 09:20:06.759227] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:28.846 [2024-07-24 09:20:06.759241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:28.846 [2024-07-24 09:20:06.759253] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
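As an aside, the recurring 'EAL: No free 2048 kB hugepages reported on node 1' notice appears harmless here: the runs complete, so DPDK is satisfying its allocations from node 0 and the message only says node 1 holds no free 2 MB hugepages. If per-node placement ever mattered, pages can be reserved explicitly; a sketch, with an arbitrary count:

echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
grep -i huge /sys/devices/system/node/node1/meminfo   # confirm the reservation took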
00:34:28.846 [2024-07-24 09:20:06.759291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:28.846 09:20:06 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.846 09:20:06 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:28.846 09:20:06 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:28.846 09:20:06 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.846 [2024-07-24 09:20:06.912307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.846 09:20:06 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:28.846 ************************************ 00:34:28.846 START TEST fio_dif_1_default 00:34:28.846 ************************************ 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:28.846 bdev_null0 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:28.846 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:29.112 [2024-07-24 09:20:06.972650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:29.112 { 00:34:29.112 "params": { 00:34:29.112 "name": "Nvme$subsystem", 00:34:29.112 "trtype": "$TEST_TRANSPORT", 00:34:29.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:29.112 "adrfam": "ipv4", 00:34:29.112 "trsvcid": "$NVMF_PORT", 00:34:29.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:29.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:29.112 "hdgst": ${hdgst:-false}, 00:34:29.112 "ddgst": ${ddgst:-false} 00:34:29.112 }, 00:34:29.112 "method": "bdev_nvme_attach_controller" 00:34:29.112 } 00:34:29.112 EOF 00:34:29.112 )") 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.112 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:29.113 09:20:06 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:29.113 "params": { 00:34:29.113 "name": "Nvme0", 00:34:29.113 "trtype": "tcp", 00:34:29.113 "traddr": "10.0.0.2", 00:34:29.113 "adrfam": "ipv4", 00:34:29.113 "trsvcid": "4420", 00:34:29.113 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:29.113 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:29.113 "hdgst": false, 00:34:29.113 "ddgst": false 00:34:29.113 }, 00:34:29.113 "method": "bdev_nvme_attach_controller" 00:34:29.113 }' 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:29.113 09:20:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:29.371 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:29.371 fio-3.35 00:34:29.371 Starting 1 thread 00:34:29.371 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.591 00:34:41.591 filename0: (groupid=0, jobs=1): err= 0: pid=3934629: Wed Jul 24 09:20:17 2024 00:34:41.591 read: IOPS=189, BW=758KiB/s (777kB/s)(7584KiB/10001msec) 00:34:41.591 slat (nsec): min=4451, max=76869, avg=10020.71, stdev=3565.09 00:34:41.591 clat (usec): min=668, max=47666, avg=21067.50, stdev=20239.64 00:34:41.591 lat (usec): min=676, max=47705, avg=21077.52, stdev=20239.98 00:34:41.591 clat percentiles (usec): 00:34:41.591 | 1.00th=[ 693], 5.00th=[ 709], 10.00th=[ 717], 20.00th=[ 734], 00:34:41.591 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[41157], 60.00th=[41157], 00:34:41.591 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:41.591 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47449], 99.95th=[47449], 00:34:41.591 | 99.99th=[47449] 00:34:41.591 bw ( KiB/s): min= 670, max= 768, per=100.00%, avg=759.47, stdev=23.89, samples=19 00:34:41.591 iops : min= 167, max= 192, 
avg=189.84, stdev= 6.08, samples=19 00:34:41.591 lat (usec) : 750=25.21%, 1000=24.58% 00:34:41.591 lat (msec) : 50=50.21% 00:34:41.591 cpu : usr=89.60%, sys=10.13%, ctx=15, majf=0, minf=238 00:34:41.591 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:41.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:41.591 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:41.591 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:41.591 00:34:41.591 Run status group 0 (all jobs): 00:34:41.591 READ: bw=758KiB/s (777kB/s), 758KiB/s-758KiB/s (777kB/s-777kB/s), io=7584KiB (7766kB), run=10001-10001msec 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 00:34:41.591 real 0m11.103s 00:34:41.591 user 0m10.160s 00:34:41.591 sys 0m1.305s 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 ************************************ 00:34:41.591 END TEST fio_dif_1_default 00:34:41.591 ************************************ 00:34:41.591 09:20:18 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:41.591 09:20:18 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:41.591 09:20:18 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 ************************************ 00:34:41.591 START TEST fio_dif_1_multi_subsystems 00:34:41.591 ************************************ 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.591 09:20:18 
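The fio job file itself travels over an anonymous fd (/dev/fd/61), so it never shows in the trace; the banner (randread, 4 KiB blocks, iodepth 4, ioengine spdk_bdev, ~10 s runtime) pins down its shape. A standalone equivalent would look roughly like the sketch below; dif.fio and bdev.json are illustrative names, not paths used by the harness:

cat > dif.fio <<'EOF'
[global]
ioengine=spdk_bdev
; bdev config: the JSON printed above; the harness passes it as /dev/fd/62
spdk_json_conf=./bdev.json
thread=1
direct=1
rw=randread
bs=4k
iodepth=4
time_based=1
runtime=10
[filename0]
; bdev name created by bdev_nvme_attach_controller -b Nvme0
filename=Nvme0n1
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio dif.fio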
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 bdev_null0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.591 [2024-07-24 09:20:18.129238] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:41.591 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.592 bdev_null1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:41.592 { 00:34:41.592 "params": { 00:34:41.592 "name": "Nvme$subsystem", 00:34:41.592 "trtype": "$TEST_TRANSPORT", 00:34:41.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.592 "adrfam": "ipv4", 00:34:41.592 "trsvcid": "$NVMF_PORT", 00:34:41.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.592 "hdgst": ${hdgst:-false}, 00:34:41.592 "ddgst": ${ddgst:-false} 00:34:41.592 }, 00:34:41.592 "method": "bdev_nvme_attach_controller" 00:34:41.592 } 00:34:41.592 EOF 00:34:41.592 )") 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # shift 
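Net effect of the multi-subsystem setup just traced: two DIF-carrying null bdevs, each exported through its own NQN, both listening on the same address and port (the transport was created earlier with --dif-insert-or-strip). A sketch of the equivalent manual sequence, mirroring the rpc_cmd calls above (64 MiB bdevs, 512-byte blocks, 16-byte metadata, DIF type 1):

./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420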
00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:41.592 { 00:34:41.592 "params": { 00:34:41.592 "name": "Nvme$subsystem", 00:34:41.592 "trtype": "$TEST_TRANSPORT", 00:34:41.592 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:41.592 "adrfam": "ipv4", 00:34:41.592 "trsvcid": "$NVMF_PORT", 00:34:41.592 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:41.592 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:41.592 "hdgst": ${hdgst:-false}, 00:34:41.592 "ddgst": ${ddgst:-false} 00:34:41.592 }, 00:34:41.592 "method": "bdev_nvme_attach_controller" 00:34:41.592 } 00:34:41.592 EOF 00:34:41.592 )") 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
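Each pass through the "for subsystem" loop above appends one heredoc fragment to the config array; IFS=, plus printf joins the fragments with commas, and jq . validates and pretty-prints the result before fio reads it. A condensed sketch of that pattern: the per-controller fragment is copied from the trace, while the outer {"subsystems": ...} wrapper around the joined list is an assumption (the trace shows only the expanded fragments, printed just below).

config=()
for subsystem in 0 1; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,   # makes "${config[*]}" join the fragments with commas
printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}' "${config[*]}" | jq .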
00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:41.592 "params": { 00:34:41.592 "name": "Nvme0", 00:34:41.592 "trtype": "tcp", 00:34:41.592 "traddr": "10.0.0.2", 00:34:41.592 "adrfam": "ipv4", 00:34:41.592 "trsvcid": "4420", 00:34:41.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:41.592 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:41.592 "hdgst": false, 00:34:41.592 "ddgst": false 00:34:41.592 }, 00:34:41.592 "method": "bdev_nvme_attach_controller" 00:34:41.592 },{ 00:34:41.592 "params": { 00:34:41.592 "name": "Nvme1", 00:34:41.592 "trtype": "tcp", 00:34:41.592 "traddr": "10.0.0.2", 00:34:41.592 "adrfam": "ipv4", 00:34:41.592 "trsvcid": "4420", 00:34:41.592 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:41.592 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:41.592 "hdgst": false, 00:34:41.592 "ddgst": false 00:34:41.592 }, 00:34:41.592 "method": "bdev_nvme_attach_controller" 00:34:41.592 }' 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:41.592 09:20:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:41.592 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.592 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:41.592 fio-3.35 00:34:41.592 Starting 2 threads 00:34:41.592 EAL: No free 2048 kB hugepages reported on node 1 00:34:51.553 00:34:51.553 filename0: (groupid=0, jobs=1): err= 0: pid=3936029: Wed Jul 24 09:20:29 2024 00:34:51.553 read: IOPS=188, BW=755KiB/s (773kB/s)(7552KiB/10001msec) 00:34:51.553 slat (nsec): min=4513, max=37679, avg=9549.96, stdev=2754.98 00:34:51.553 clat (usec): min=641, max=48026, avg=21157.07, stdev=20262.65 00:34:51.553 lat (usec): min=649, max=48039, avg=21166.62, stdev=20262.75 00:34:51.553 clat percentiles (usec): 00:34:51.553 | 1.00th=[ 676], 5.00th=[ 685], 10.00th=[ 693], 20.00th=[ 709], 00:34:51.553 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[41157], 60.00th=[41157], 00:34:51.553 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:51.553 | 99.00th=[41157], 99.50th=[41157], 99.90th=[47973], 99.95th=[47973], 00:34:51.553 | 99.99th=[47973] 
00:34:51.553 bw ( KiB/s): min= 672, max= 768, per=66.56%, avg=756.21, stdev=28.64, samples=19 00:34:51.553 iops : min= 168, max= 192, avg=189.05, stdev= 7.16, samples=19 00:34:51.553 lat (usec) : 750=33.95%, 1000=15.20% 00:34:51.553 lat (msec) : 2=0.42%, 50=50.42% 00:34:51.553 cpu : usr=94.08%, sys=5.64%, ctx=17, majf=0, minf=177 00:34:51.553 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.553 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.553 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.553 filename1: (groupid=0, jobs=1): err= 0: pid=3936030: Wed Jul 24 09:20:29 2024 00:34:51.553 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:34:51.553 slat (nsec): min=4368, max=22025, avg=9723.82, stdev=2835.42 00:34:51.553 clat (usec): min=41064, max=48069, avg=41986.63, stdev=443.58 00:34:51.553 lat (usec): min=41072, max=48086, avg=41996.35, stdev=443.66 00:34:51.553 clat percentiles (usec): 00:34:51.553 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:51.554 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:51.554 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:51.554 | 99.00th=[42730], 99.50th=[43254], 99.90th=[47973], 99.95th=[47973], 00:34:51.554 | 99.99th=[47973] 00:34:51.554 bw ( KiB/s): min= 352, max= 384, per=33.45%, avg=380.63, stdev=10.09, samples=19 00:34:51.554 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:34:51.554 lat (msec) : 50=100.00% 00:34:51.554 cpu : usr=94.48%, sys=5.21%, ctx=29, majf=0, minf=86 00:34:51.554 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.554 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.554 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.554 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:51.554 00:34:51.554 Run status group 0 (all jobs): 00:34:51.554 READ: bw=1136KiB/s (1163kB/s), 381KiB/s-755KiB/s (390kB/s-773kB/s), io=11.1MiB (11.6MB), run=10001-10001msec 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 00:34:51.554 real 0m11.201s 00:34:51.554 user 0m20.152s 00:34:51.554 sys 0m1.362s 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 ************************************ 00:34:51.554 END TEST fio_dif_1_multi_subsystems 00:34:51.554 ************************************ 00:34:51.554 09:20:29 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:51.554 09:20:29 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:51.554 09:20:29 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 ************************************ 00:34:51.554 START TEST fio_dif_rand_params 00:34:51.554 ************************************ 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:51.554 09:20:29 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 bdev_null0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:51.554 [2024-07-24 09:20:29.383548] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:51.554 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:51.555 { 00:34:51.555 "params": { 00:34:51.555 "name": "Nvme$subsystem", 00:34:51.555 "trtype": "$TEST_TRANSPORT", 00:34:51.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:51.555 "adrfam": "ipv4", 00:34:51.555 
"trsvcid": "$NVMF_PORT", 00:34:51.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:51.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:51.555 "hdgst": ${hdgst:-false}, 00:34:51.555 "ddgst": ${ddgst:-false} 00:34:51.555 }, 00:34:51.555 "method": "bdev_nvme_attach_controller" 00:34:51.555 } 00:34:51.555 EOF 00:34:51.555 )") 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:51.555 "params": { 00:34:51.555 "name": "Nvme0", 00:34:51.555 "trtype": "tcp", 00:34:51.555 "traddr": "10.0.0.2", 00:34:51.555 "adrfam": "ipv4", 00:34:51.555 "trsvcid": "4420", 00:34:51.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:51.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:51.555 "hdgst": false, 00:34:51.555 "ddgst": false 00:34:51.555 }, 00:34:51.555 "method": "bdev_nvme_attach_controller" 00:34:51.555 }' 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:51.555 09:20:29 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:51.555 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:51.555 ... 
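The three jobs starting below run against a null bdev created earlier in this test with --md-size 16 --dif-type 3: 512-byte data blocks, each followed by 16 bytes of metadata carrying T10 protection information. DIF type 1 (used by fio_dif_1_multi_subsystems above) ties the reference tag to the LBA, whereas type 3 leaves the reference tag unchecked, so effectively only the guard CRC protects the block. Outside the harness the equivalent call can be issued directly; this assumes the stock scripts/rpc.py client and its size-in-MB positional argument:

# 64 MB null bdev, 512-byte blocks + 16-byte metadata, DIF type 3
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3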
00:34:51.555 fio-3.35 00:34:51.555 Starting 3 threads 00:34:51.813 EAL: No free 2048 kB hugepages reported on node 1 00:34:58.373 00:34:58.373 filename0: (groupid=0, jobs=1): err= 0: pid=3937423: Wed Jul 24 09:20:35 2024 00:34:58.373 read: IOPS=208, BW=26.1MiB/s (27.3MB/s)(131MiB/5005msec) 00:34:58.373 slat (nsec): min=7036, max=76825, avg=15212.25, stdev=5086.30 00:34:58.373 clat (usec): min=4793, max=90443, avg=14359.30, stdev=12245.90 00:34:58.373 lat (usec): min=4804, max=90461, avg=14374.52, stdev=12246.43 00:34:58.373 clat percentiles (usec): 00:34:58.373 | 1.00th=[ 5342], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 6521], 00:34:58.373 | 30.00th=[ 8356], 40.00th=[ 8848], 50.00th=[10683], 60.00th=[12780], 00:34:58.373 | 70.00th=[14877], 80.00th=[16909], 90.00th=[20841], 95.00th=[50594], 00:34:58.373 | 99.00th=[57410], 99.50th=[57934], 99.90th=[62129], 99.95th=[90702], 00:34:58.373 | 99.99th=[90702] 00:34:58.373 bw ( KiB/s): min=16384, max=34560, per=39.26%, avg=26649.60, stdev=5209.46, samples=10 00:34:58.373 iops : min= 128, max= 270, avg=208.20, stdev=40.70, samples=10 00:34:58.373 lat (msec) : 10=47.70%, 20=41.57%, 50=4.41%, 100=6.32% 00:34:58.373 cpu : usr=94.52%, sys=5.00%, ctx=19, majf=0, minf=173 00:34:58.373 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.373 issued rwts: total=1044,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.374 filename0: (groupid=0, jobs=1): err= 0: pid=3937424: Wed Jul 24 09:20:35 2024 00:34:58.374 read: IOPS=166, BW=20.8MiB/s (21.8MB/s)(105MiB/5021msec) 00:34:58.374 slat (usec): min=4, max=404, avg=19.36, stdev=25.35 00:34:58.374 clat (usec): min=4967, max=94842, avg=17966.48, stdev=15999.68 00:34:58.374 lat (usec): min=4980, max=94855, avg=17985.84, stdev=15999.92 00:34:58.374 clat percentiles (usec): 00:34:58.374 | 1.00th=[ 5342], 5.00th=[ 5866], 10.00th=[ 6718], 20.00th=[ 8455], 00:34:58.374 | 30.00th=[ 8979], 40.00th=[10028], 50.00th=[11863], 60.00th=[13829], 00:34:58.374 | 70.00th=[15401], 80.00th=[16909], 90.00th=[51643], 95.00th=[54789], 00:34:58.374 | 99.00th=[57410], 99.50th=[59507], 99.90th=[94897], 99.95th=[94897], 00:34:58.374 | 99.99th=[94897] 00:34:58.374 bw ( KiB/s): min=13056, max=28928, per=31.46%, avg=21350.40, stdev=6170.26, samples=10 00:34:58.374 iops : min= 102, max= 226, avg=166.80, stdev=48.21, samples=10 00:34:58.374 lat (msec) : 10=40.14%, 20=43.49%, 50=4.06%, 100=12.31% 00:34:58.374 cpu : usr=66.85%, sys=15.14%, ctx=309, majf=0, minf=141 00:34:58.374 IO depths : 1=1.9%, 2=98.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.374 issued rwts: total=837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.374 filename0: (groupid=0, jobs=1): err= 0: pid=3937425: Wed Jul 24 09:20:35 2024 00:34:58.374 read: IOPS=155, BW=19.5MiB/s (20.4MB/s)(98.0MiB/5026msec) 00:34:58.374 slat (nsec): min=4650, max=38387, avg=15053.16, stdev=4604.87 00:34:58.374 clat (usec): min=6172, max=56445, avg=19210.88, stdev=15949.12 00:34:58.374 lat (usec): min=6185, max=56464, avg=19225.94, stdev=15949.05 00:34:58.374 clat percentiles (usec): 
00:34:58.374 | 1.00th=[ 6783], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[10028], 00:34:58.374 | 30.00th=[10552], 40.00th=[11207], 50.00th=[12387], 60.00th=[13435], 00:34:58.374 | 70.00th=[14484], 80.00th=[16057], 90.00th=[51643], 95.00th=[53740], 00:34:58.374 | 99.00th=[55313], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:34:58.374 | 99.99th=[56361] 00:34:58.374 bw ( KiB/s): min=16128, max=26880, per=29.46%, avg=19993.60, stdev=3683.10, samples=10 00:34:58.374 iops : min= 126, max= 210, avg=156.20, stdev=28.77, samples=10 00:34:58.374 lat (msec) : 10=20.41%, 20=60.71%, 50=3.19%, 100=15.69% 00:34:58.374 cpu : usr=94.65%, sys=4.92%, ctx=9, majf=0, minf=38 00:34:58.374 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:58.374 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.374 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:58.374 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:58.374 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:58.374 00:34:58.374 Run status group 0 (all jobs): 00:34:58.374 READ: bw=66.3MiB/s (69.5MB/s), 19.5MiB/s-26.1MiB/s (20.4MB/s-27.3MB/s), io=333MiB (349MB), run=5005-5026msec 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
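From here the harness repeats the cycle it just tore down, now with NULL_DIF=2 and three subsystems (0, 1, 2). Condensed from the rpc_cmd calls in this trace, the per-subsystem lifecycle is: create the backing null bdev, create the NVMe-oF subsystem, attach the bdev as a namespace, expose a TCP listener, run fio against it, then delete the subsystem before the bdev. The commands below are copied verbatim from the trace (rpc_cmd is the harness wrapper around SPDK's RPC client), shown for subsystem 0 only:

rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
# ... fio exercises the exported namespace ...
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc_cmd bdev_null_delete bdev_null0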
00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 bdev_null0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 [2024-07-24 09:20:35.586877] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 bdev_null1 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 bdev_null2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:58.374 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:34:58.375 { 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme$subsystem", 00:34:58.375 "trtype": "$TEST_TRANSPORT", 00:34:58.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "$NVMF_PORT", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.375 "hdgst": ${hdgst:-false}, 00:34:58.375 "ddgst": ${ddgst:-false} 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 } 00:34:58.375 EOF 00:34:58.375 )") 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.375 { 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme$subsystem", 00:34:58.375 "trtype": "$TEST_TRANSPORT", 00:34:58.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "$NVMF_PORT", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.375 "hdgst": ${hdgst:-false}, 00:34:58.375 "ddgst": ${ddgst:-false} 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 } 00:34:58.375 EOF 00:34:58.375 )") 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:58.375 { 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme$subsystem", 00:34:58.375 "trtype": "$TEST_TRANSPORT", 00:34:58.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "$NVMF_PORT", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.375 "hdgst": ${hdgst:-false}, 00:34:58.375 "ddgst": ${ddgst:-false} 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 } 00:34:58.375 EOF 00:34:58.375 )") 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme0", 00:34:58.375 "trtype": "tcp", 00:34:58.375 "traddr": "10.0.0.2", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "4420", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.375 "hdgst": false, 00:34:58.375 "ddgst": false 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 },{ 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme1", 00:34:58.375 "trtype": "tcp", 00:34:58.375 "traddr": "10.0.0.2", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "4420", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:58.375 "hdgst": false, 00:34:58.375 "ddgst": false 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 },{ 00:34:58.375 "params": { 00:34:58.375 "name": "Nvme2", 00:34:58.375 "trtype": "tcp", 00:34:58.375 "traddr": "10.0.0.2", 00:34:58.375 "adrfam": "ipv4", 00:34:58.375 "trsvcid": "4420", 00:34:58.375 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:58.375 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:58.375 "hdgst": false, 00:34:58.375 "ddgst": false 00:34:58.375 }, 00:34:58.375 "method": "bdev_nvme_attach_controller" 00:34:58.375 }' 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # asan_lib= 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:58.375 09:20:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.375 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.375 ... 00:34:58.375 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.375 ... 00:34:58.375 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:58.375 ... 00:34:58.375 fio-3.35 00:34:58.375 Starting 24 threads 00:34:58.375 EAL: No free 2048 kB hugepages reported on node 1 00:35:10.579 00:35:10.579 filename0: (groupid=0, jobs=1): err= 0: pid=3938285: Wed Jul 24 09:20:47 2024 00:35:10.579 read: IOPS=52, BW=209KiB/s (214kB/s)(2112KiB/10119msec) 00:35:10.579 slat (usec): min=8, max=190, avg=72.49, stdev=25.84 00:35:10.579 clat (msec): min=124, max=514, avg=304.78, stdev=79.26 00:35:10.579 lat (msec): min=124, max=514, avg=304.85, stdev=79.27 00:35:10.579 clat percentiles (msec): 00:35:10.579 | 1.00th=[ 125], 5.00th=[ 146], 10.00th=[ 180], 20.00th=[ 236], 00:35:10.579 | 30.00th=[ 264], 40.00th=[ 309], 50.00th=[ 326], 60.00th=[ 338], 00:35:10.579 | 70.00th=[ 347], 80.00th=[ 363], 90.00th=[ 388], 95.00th=[ 405], 00:35:10.579 | 99.00th=[ 506], 99.50th=[ 506], 99.90th=[ 514], 99.95th=[ 514], 00:35:10.579 | 99.99th=[ 514] 00:35:10.579 bw ( KiB/s): min= 128, max= 384, per=3.63%, avg=204.80, stdev=74.07, samples=20 00:35:10.579 iops : min= 32, max= 96, avg=51.20, stdev=18.52, samples=20 00:35:10.579 lat (msec) : 250=24.81%, 500=74.05%, 750=1.14% 00:35:10.579 cpu : usr=97.88%, sys=1.35%, ctx=53, majf=0, minf=37 00:35:10.579 IO depths : 1=3.4%, 2=9.7%, 4=25.0%, 8=52.8%, 16=9.1%, 32=0.0%, >=64=0.0% 00:35:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.579 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.579 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.579 filename0: (groupid=0, jobs=1): err= 0: pid=3938286: Wed Jul 24 09:20:47 2024 00:35:10.579 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10085msec) 00:35:10.579 slat (usec): min=8, max=142, avg=22.15, stdev=16.92 00:35:10.579 clat (msec): min=213, max=467, avg=323.84, stdev=49.29 00:35:10.579 lat (msec): min=213, max=467, avg=323.87, stdev=49.28 00:35:10.579 clat percentiles (msec): 00:35:10.579 | 1.00th=[ 222], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 288], 00:35:10.579 | 30.00th=[ 305], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 338], 00:35:10.579 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 401], 00:35:10.579 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 468], 99.95th=[ 468], 00:35:10.579 | 99.99th=[ 468] 00:35:10.579 bw ( KiB/s): min= 128, max= 256, per=3.40%, avg=192.00, stdev=64.21, samples=20 00:35:10.579 iops : min= 32, max= 64, avg=48.00, stdev=16.05, samples=20 00:35:10.579 lat (msec) : 250=10.48%, 500=89.52% 00:35:10.579 cpu : usr=98.02%, sys=1.43%, ctx=22, majf=0, minf=31 00:35:10.579 IO depths : 1=5.0%, 
2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:10.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.579 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.579 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.579 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.579 filename0: (groupid=0, jobs=1): err= 0: pid=3938287: Wed Jul 24 09:20:47 2024 00:35:10.579 read: IOPS=49, BW=198KiB/s (202kB/s)(1984KiB/10041msec) 00:35:10.579 slat (nsec): min=6417, max=84482, avg=19927.24, stdev=11789.40 00:35:10.579 clat (msec): min=132, max=473, avg=323.73, stdev=49.49 00:35:10.579 lat (msec): min=132, max=473, avg=323.75, stdev=49.49 00:35:10.579 clat percentiles (msec): 00:35:10.579 | 1.00th=[ 213], 5.00th=[ 241], 10.00th=[ 247], 20.00th=[ 288], 00:35:10.580 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 330], 60.00th=[ 338], 00:35:10.580 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 401], 00:35:10.580 | 99.00th=[ 405], 99.50th=[ 451], 99.90th=[ 472], 99.95th=[ 472], 00:35:10.580 | 99.99th=[ 472] 00:35:10.580 bw ( KiB/s): min= 128, max= 256, per=3.40%, avg=192.00, stdev=62.72, samples=20 00:35:10.580 iops : min= 32, max= 64, avg=48.00, stdev=15.68, samples=20 00:35:10.580 lat (msec) : 250=10.08%, 500=89.92% 00:35:10.580 cpu : usr=98.28%, sys=1.28%, ctx=20, majf=0, minf=34 00:35:10.580 IO depths : 1=3.0%, 2=9.3%, 4=25.0%, 8=53.2%, 16=9.5%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename0: (groupid=0, jobs=1): err= 0: pid=3938288: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=61, BW=247KiB/s (253kB/s)(2496KiB/10102msec) 00:35:10.580 slat (nsec): min=7543, max=66672, avg=20658.38, stdev=11442.78 00:35:10.580 clat (msec): min=175, max=405, avg=258.21, stdev=44.42 00:35:10.580 lat (msec): min=175, max=405, avg=258.23, stdev=44.42 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 176], 5.00th=[ 203], 10.00th=[ 218], 20.00th=[ 226], 00:35:10.580 | 30.00th=[ 230], 40.00th=[ 236], 50.00th=[ 249], 60.00th=[ 257], 00:35:10.580 | 70.00th=[ 262], 80.00th=[ 309], 90.00th=[ 338], 95.00th=[ 342], 00:35:10.580 | 99.00th=[ 347], 99.50th=[ 347], 99.90th=[ 405], 99.95th=[ 405], 00:35:10.580 | 99.99th=[ 405] 00:35:10.580 bw ( KiB/s): min= 128, max= 368, per=4.32%, avg=243.20, stdev=53.60, samples=20 00:35:10.580 iops : min= 32, max= 92, avg=60.80, stdev=13.40, samples=20 00:35:10.580 lat (msec) : 250=51.28%, 500=48.72% 00:35:10.580 cpu : usr=98.27%, sys=1.32%, ctx=24, majf=0, minf=35 00:35:10.580 IO depths : 1=4.0%, 2=8.8%, 4=20.7%, 8=58.0%, 16=8.5%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=92.8%, 8=1.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename0: (groupid=0, jobs=1): err= 0: pid=3938289: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=68, BW=273KiB/s (279kB/s)(2760KiB/10117msec) 00:35:10.580 slat (usec): min=5, max=199, avg=26.54, stdev=24.06 00:35:10.580 clat (msec): min=122, max=445, avg=233.96, stdev=41.84 00:35:10.580 lat 
(msec): min=122, max=445, avg=233.99, stdev=41.84 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 124], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 215], 00:35:10.580 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 234], 60.00th=[ 239], 00:35:10.580 | 70.00th=[ 251], 80.00th=[ 259], 90.00th=[ 275], 95.00th=[ 309], 00:35:10.580 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 447], 99.95th=[ 447], 00:35:10.580 | 99.99th=[ 447] 00:35:10.580 bw ( KiB/s): min= 176, max= 384, per=4.79%, avg=269.60, stdev=47.37, samples=20 00:35:10.580 iops : min= 44, max= 96, avg=67.40, stdev=11.84, samples=20 00:35:10.580 lat (msec) : 250=67.54%, 500=32.46% 00:35:10.580 cpu : usr=97.39%, sys=1.72%, ctx=69, majf=0, minf=32 00:35:10.580 IO depths : 1=2.2%, 2=6.5%, 4=19.1%, 8=61.7%, 16=10.4%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=92.4%, 8=2.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename0: (groupid=0, jobs=1): err= 0: pid=3938290: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=58, BW=234KiB/s (240kB/s)(2368KiB/10102msec) 00:35:10.580 slat (nsec): min=8374, max=99962, avg=25218.63, stdev=15597.84 00:35:10.580 clat (msec): min=144, max=429, avg=271.71, stdev=51.56 00:35:10.580 lat (msec): min=144, max=429, avg=271.74, stdev=51.56 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 163], 5.00th=[ 190], 10.00th=[ 222], 20.00th=[ 228], 00:35:10.580 | 30.00th=[ 239], 40.00th=[ 247], 50.00th=[ 259], 60.00th=[ 275], 00:35:10.580 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 347], 00:35:10.580 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 430], 99.95th=[ 430], 00:35:10.580 | 99.99th=[ 430] 00:35:10.580 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=230.40, stdev=62.60, samples=20 00:35:10.580 iops : min= 32, max= 96, avg=57.60, stdev=15.65, samples=20 00:35:10.580 lat (msec) : 250=40.20%, 500=59.80% 00:35:10.580 cpu : usr=98.19%, sys=1.38%, ctx=39, majf=0, minf=32 00:35:10.580 IO depths : 1=2.4%, 2=8.6%, 4=25.0%, 8=53.9%, 16=10.1%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename0: (groupid=0, jobs=1): err= 0: pid=3938291: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=53, BW=216KiB/s (221kB/s)(2176KiB/10093msec) 00:35:10.580 slat (nsec): min=7889, max=99841, avg=32724.17, stdev=21624.76 00:35:10.580 clat (msec): min=123, max=485, avg=296.56, stdev=67.56 00:35:10.580 lat (msec): min=123, max=485, avg=296.59, stdev=67.56 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 124], 5.00th=[ 213], 10.00th=[ 222], 20.00th=[ 236], 00:35:10.580 | 30.00th=[ 247], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 326], 00:35:10.580 | 70.00th=[ 338], 80.00th=[ 351], 90.00th=[ 376], 95.00th=[ 405], 00:35:10.580 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 485], 99.95th=[ 485], 00:35:10.580 | 99.99th=[ 485] 00:35:10.580 bw ( KiB/s): min= 128, max= 256, per=3.75%, avg=211.20, stdev=61.11, samples=20 00:35:10.580 iops : min= 32, max= 64, avg=52.80, stdev=15.28, samples=20 00:35:10.580 lat (msec) : 250=30.15%, 500=69.85% 00:35:10.580 cpu : usr=98.35%, 
sys=1.22%, ctx=18, majf=0, minf=29 00:35:10.580 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename0: (groupid=0, jobs=1): err= 0: pid=3938292: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=57, BW=229KiB/s (235kB/s)(2304KiB/10042msec) 00:35:10.580 slat (usec): min=8, max=108, avg=30.91, stdev=20.26 00:35:10.580 clat (msec): min=147, max=474, avg=278.68, stdev=53.33 00:35:10.580 lat (msec): min=147, max=474, avg=278.71, stdev=53.33 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 163], 5.00th=[ 199], 10.00th=[ 222], 20.00th=[ 230], 00:35:10.580 | 30.00th=[ 243], 40.00th=[ 257], 50.00th=[ 262], 60.00th=[ 305], 00:35:10.580 | 70.00th=[ 321], 80.00th=[ 338], 90.00th=[ 351], 95.00th=[ 355], 00:35:10.580 | 99.00th=[ 397], 99.50th=[ 435], 99.90th=[ 477], 99.95th=[ 477], 00:35:10.580 | 99.99th=[ 477] 00:35:10.580 bw ( KiB/s): min= 128, max= 368, per=3.97%, avg=224.00, stdev=64.63, samples=20 00:35:10.580 iops : min= 32, max= 92, avg=56.00, stdev=16.16, samples=20 00:35:10.580 lat (msec) : 250=36.11%, 500=63.89% 00:35:10.580 cpu : usr=98.23%, sys=1.33%, ctx=21, majf=0, minf=38 00:35:10.580 IO depths : 1=2.8%, 2=8.9%, 4=24.5%, 8=54.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename1: (groupid=0, jobs=1): err= 0: pid=3938293: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=57, BW=230KiB/s (236kB/s)(2328KiB/10102msec) 00:35:10.580 slat (usec): min=5, max=116, avg=31.30, stdev=20.55 00:35:10.580 clat (msec): min=143, max=472, avg=276.34, stdev=56.30 00:35:10.580 lat (msec): min=143, max=472, avg=276.38, stdev=56.31 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 163], 5.00th=[ 190], 10.00th=[ 218], 20.00th=[ 228], 00:35:10.580 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 264], 60.00th=[ 284], 00:35:10.580 | 70.00th=[ 326], 80.00th=[ 338], 90.00th=[ 342], 95.00th=[ 359], 00:35:10.580 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 472], 99.95th=[ 472], 00:35:10.580 | 99.99th=[ 472] 00:35:10.580 bw ( KiB/s): min= 128, max= 368, per=4.02%, avg=226.40, stdev=61.92, samples=20 00:35:10.580 iops : min= 32, max= 92, avg=56.60, stdev=15.48, samples=20 00:35:10.580 lat (msec) : 250=38.14%, 500=61.86% 00:35:10.580 cpu : usr=98.29%, sys=1.28%, ctx=20, majf=0, minf=39 00:35:10.580 IO depths : 1=2.6%, 2=7.9%, 4=22.2%, 8=57.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.580 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.580 filename1: (groupid=0, jobs=1): err= 0: pid=3938294: Wed Jul 24 09:20:47 2024 00:35:10.580 read: IOPS=67, BW=271KiB/s (277kB/s)(2736KiB/10102msec) 00:35:10.580 slat (usec): min=8, max=127, avg=23.24, stdev=17.52 00:35:10.580 clat (msec): min=107, 
max=474, avg=235.11, stdev=49.90 00:35:10.580 lat (msec): min=107, max=474, avg=235.14, stdev=49.91 00:35:10.580 clat percentiles (msec): 00:35:10.580 | 1.00th=[ 108], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 211], 00:35:10.580 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 230], 60.00th=[ 239], 00:35:10.580 | 70.00th=[ 247], 80.00th=[ 257], 90.00th=[ 275], 95.00th=[ 326], 00:35:10.580 | 99.00th=[ 405], 99.50th=[ 435], 99.90th=[ 477], 99.95th=[ 477], 00:35:10.580 | 99.99th=[ 477] 00:35:10.580 bw ( KiB/s): min= 128, max= 384, per=4.75%, avg=267.20, stdev=60.11, samples=20 00:35:10.580 iops : min= 32, max= 96, avg=66.80, stdev=15.03, samples=20 00:35:10.580 lat (msec) : 250=70.76%, 500=29.24% 00:35:10.580 cpu : usr=98.27%, sys=1.30%, ctx=23, majf=0, minf=40 00:35:10.580 IO depths : 1=2.6%, 2=7.6%, 4=21.1%, 8=58.8%, 16=9.9%, 32=0.0%, >=64=0.0% 00:35:10.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.580 issued rwts: total=684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938295: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10091msec) 00:35:10.581 slat (nsec): min=8455, max=91530, avg=28852.89, stdev=19904.40 00:35:10.581 clat (msec): min=123, max=503, avg=315.08, stdev=67.15 00:35:10.581 lat (msec): min=123, max=503, avg=315.11, stdev=67.14 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 124], 5.00th=[ 178], 10.00th=[ 224], 20.00th=[ 259], 00:35:10.581 | 30.00th=[ 300], 40.00th=[ 313], 50.00th=[ 326], 60.00th=[ 342], 00:35:10.581 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 384], 95.00th=[ 384], 00:35:10.581 | 99.00th=[ 456], 99.50th=[ 498], 99.90th=[ 506], 99.95th=[ 506], 00:35:10.581 | 99.99th=[ 506] 00:35:10.581 bw ( KiB/s): min= 128, max= 256, per=3.52%, avg=198.40, stdev=65.33, samples=20 00:35:10.581 iops : min= 32, max= 64, avg=49.60, stdev=16.33, samples=20 00:35:10.581 lat (msec) : 250=17.19%, 500=82.42%, 750=0.39% 00:35:10.581 cpu : usr=98.29%, sys=1.29%, ctx=15, majf=0, minf=30 00:35:10.581 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938296: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=50, BW=204KiB/s (209kB/s)(2048KiB/10055msec) 00:35:10.581 slat (usec): min=12, max=141, avg=78.21, stdev=19.97 00:35:10.581 clat (msec): min=98, max=490, avg=313.53, stdev=64.01 00:35:10.581 lat (msec): min=98, max=490, avg=313.61, stdev=64.02 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 116], 5.00th=[ 199], 10.00th=[ 236], 20.00th=[ 257], 00:35:10.581 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 330], 60.00th=[ 338], 00:35:10.581 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 384], 95.00th=[ 401], 00:35:10.581 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 489], 99.95th=[ 489], 00:35:10.581 | 99.99th=[ 489] 00:35:10.581 bw ( KiB/s): min= 128, max= 256, per=3.52%, avg=198.40, stdev=65.33, samples=20 00:35:10.581 iops : min= 32, max= 64, avg=49.60, stdev=16.33, samples=20 00:35:10.581 lat (msec) : 
100=0.39%, 250=16.41%, 500=83.20% 00:35:10.581 cpu : usr=97.69%, sys=1.47%, ctx=56, majf=0, minf=34 00:35:10.581 IO depths : 1=4.5%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938297: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10087msec) 00:35:10.581 slat (usec): min=7, max=143, avg=22.33, stdev=12.73 00:35:10.581 clat (msec): min=146, max=458, avg=315.00, stdev=59.93 00:35:10.581 lat (msec): min=146, max=458, avg=315.02, stdev=59.93 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 146], 5.00th=[ 190], 10.00th=[ 236], 20.00th=[ 257], 00:35:10.581 | 30.00th=[ 300], 40.00th=[ 317], 50.00th=[ 326], 60.00th=[ 338], 00:35:10.581 | 70.00th=[ 351], 80.00th=[ 359], 90.00th=[ 388], 95.00th=[ 397], 00:35:10.581 | 99.00th=[ 414], 99.50th=[ 443], 99.90th=[ 460], 99.95th=[ 460], 00:35:10.581 | 99.99th=[ 460] 00:35:10.581 bw ( KiB/s): min= 128, max= 384, per=3.52%, avg=198.40, stdev=77.42, samples=20 00:35:10.581 iops : min= 32, max= 96, avg=49.60, stdev=19.35, samples=20 00:35:10.581 lat (msec) : 250=16.80%, 500=83.20% 00:35:10.581 cpu : usr=97.92%, sys=1.32%, ctx=64, majf=0, minf=38 00:35:10.581 IO depths : 1=4.1%, 2=10.4%, 4=25.0%, 8=52.1%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938298: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=65, BW=262KiB/s (268kB/s)(2648KiB/10102msec) 00:35:10.581 slat (nsec): min=8091, max=82358, avg=19764.59, stdev=15446.87 00:35:10.581 clat (msec): min=144, max=440, avg=242.94, stdev=37.11 00:35:10.581 lat (msec): min=144, max=440, avg=242.96, stdev=37.11 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 146], 5.00th=[ 190], 10.00th=[ 213], 20.00th=[ 222], 00:35:10.581 | 30.00th=[ 226], 40.00th=[ 230], 50.00th=[ 239], 60.00th=[ 245], 00:35:10.581 | 70.00th=[ 253], 80.00th=[ 262], 90.00th=[ 300], 95.00th=[ 326], 00:35:10.581 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 443], 99.95th=[ 443], 00:35:10.581 | 99.99th=[ 443] 00:35:10.581 bw ( KiB/s): min= 128, max= 384, per=4.59%, avg=258.40, stdev=56.46, samples=20 00:35:10.581 iops : min= 32, max= 96, avg=64.60, stdev=14.11, samples=20 00:35:10.581 lat (msec) : 250=64.35%, 500=35.65% 00:35:10.581 cpu : usr=98.40%, sys=1.17%, ctx=41, majf=0, minf=43 00:35:10.581 IO depths : 1=1.4%, 2=4.4%, 4=15.3%, 8=67.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=91.3%, 8=3.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938299: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=72, BW=289KiB/s (295kB/s)(2920KiB/10119msec) 00:35:10.581 slat (nsec): 
min=7142, max=92371, avg=14756.16, stdev=12854.97 00:35:10.581 clat (msec): min=124, max=380, avg=220.58, stdev=51.86 00:35:10.581 lat (msec): min=124, max=380, avg=220.60, stdev=51.86 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 125], 5.00th=[ 136], 10.00th=[ 144], 20.00th=[ 174], 00:35:10.581 | 30.00th=[ 209], 40.00th=[ 220], 50.00th=[ 226], 60.00th=[ 232], 00:35:10.581 | 70.00th=[ 239], 80.00th=[ 253], 90.00th=[ 271], 95.00th=[ 338], 00:35:10.581 | 99.00th=[ 359], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:35:10.581 | 99.99th=[ 380] 00:35:10.581 bw ( KiB/s): min= 176, max= 384, per=5.07%, avg=285.60, stdev=59.02, samples=20 00:35:10.581 iops : min= 44, max= 96, avg=71.40, stdev=14.76, samples=20 00:35:10.581 lat (msec) : 250=77.81%, 500=22.19% 00:35:10.581 cpu : usr=98.11%, sys=1.35%, ctx=53, majf=0, minf=24 00:35:10.581 IO depths : 1=0.7%, 2=3.6%, 4=14.5%, 8=69.2%, 16=12.1%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename1: (groupid=0, jobs=1): err= 0: pid=3938300: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=57, BW=228KiB/s (234kB/s)(2304KiB/10088msec) 00:35:10.581 slat (nsec): min=8233, max=82751, avg=22600.97, stdev=11027.02 00:35:10.581 clat (msec): min=134, max=499, avg=278.90, stdev=62.19 00:35:10.581 lat (msec): min=134, max=499, avg=278.92, stdev=62.19 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 146], 5.00th=[ 190], 10.00th=[ 211], 20.00th=[ 228], 00:35:10.581 | 30.00th=[ 236], 40.00th=[ 245], 50.00th=[ 257], 60.00th=[ 296], 00:35:10.581 | 70.00th=[ 334], 80.00th=[ 342], 90.00th=[ 355], 95.00th=[ 363], 00:35:10.581 | 99.00th=[ 426], 99.50th=[ 464], 99.90th=[ 502], 99.95th=[ 502], 00:35:10.581 | 99.99th=[ 502] 00:35:10.581 bw ( KiB/s): min= 128, max= 368, per=3.97%, avg=224.00, stdev=66.48, samples=20 00:35:10.581 iops : min= 32, max= 92, avg=56.00, stdev=16.62, samples=20 00:35:10.581 lat (msec) : 250=44.44%, 500=55.56% 00:35:10.581 cpu : usr=98.17%, sys=1.40%, ctx=12, majf=0, minf=32 00:35:10.581 IO depths : 1=2.8%, 2=9.0%, 4=25.0%, 8=53.5%, 16=9.7%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename2: (groupid=0, jobs=1): err= 0: pid=3938301: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=68, BW=273KiB/s (280kB/s)(2760KiB/10102msec) 00:35:10.581 slat (usec): min=7, max=104, avg=18.84, stdev=13.89 00:35:10.581 clat (msec): min=107, max=450, avg=233.01, stdev=40.98 00:35:10.581 lat (msec): min=107, max=450, avg=233.03, stdev=40.98 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 108], 5.00th=[ 155], 10.00th=[ 190], 20.00th=[ 215], 00:35:10.581 | 30.00th=[ 222], 40.00th=[ 228], 50.00th=[ 234], 60.00th=[ 241], 00:35:10.581 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 275], 95.00th=[ 305], 00:35:10.581 | 99.00th=[ 363], 99.50th=[ 363], 99.90th=[ 451], 99.95th=[ 451], 00:35:10.581 | 99.99th=[ 451] 00:35:10.581 bw ( KiB/s): min= 240, max= 368, per=4.79%, avg=269.60, stdev=29.49, samples=20 00:35:10.581 iops : min= 60, 
max= 92, avg=67.40, stdev= 7.37, samples=20 00:35:10.581 lat (msec) : 250=73.33%, 500=26.67% 00:35:10.581 cpu : usr=98.21%, sys=1.34%, ctx=19, majf=0, minf=37 00:35:10.581 IO depths : 1=1.3%, 2=3.6%, 4=13.0%, 8=70.7%, 16=11.3%, 32=0.0%, >=64=0.0% 00:35:10.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 complete : 0=0.0%, 4=90.6%, 8=3.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.581 issued rwts: total=690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.581 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.581 filename2: (groupid=0, jobs=1): err= 0: pid=3938302: Wed Jul 24 09:20:47 2024 00:35:10.581 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10095msec) 00:35:10.581 slat (usec): min=8, max=258, avg=55.33, stdev=31.11 00:35:10.581 clat (msec): min=123, max=515, avg=314.84, stdev=52.18 00:35:10.581 lat (msec): min=123, max=515, avg=314.90, stdev=52.17 00:35:10.581 clat percentiles (msec): 00:35:10.581 | 1.00th=[ 220], 5.00th=[ 230], 10.00th=[ 241], 20.00th=[ 259], 00:35:10.581 | 30.00th=[ 296], 40.00th=[ 309], 50.00th=[ 317], 60.00th=[ 338], 00:35:10.581 | 70.00th=[ 342], 80.00th=[ 355], 90.00th=[ 380], 95.00th=[ 384], 00:35:10.581 | 99.00th=[ 439], 99.50th=[ 468], 99.90th=[ 514], 99.95th=[ 514], 00:35:10.581 | 99.99th=[ 514] 00:35:10.582 bw ( KiB/s): min= 128, max= 256, per=3.52%, avg=198.40, stdev=63.87, samples=20 00:35:10.582 iops : min= 32, max= 64, avg=49.60, stdev=15.97, samples=20 00:35:10.582 lat (msec) : 250=13.67%, 500=85.94%, 750=0.39% 00:35:10.582 cpu : usr=97.34%, sys=1.67%, ctx=73, majf=0, minf=39 00:35:10.582 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938303: Wed Jul 24 09:20:47 2024 00:35:10.582 read: IOPS=53, BW=215KiB/s (220kB/s)(2168KiB/10093msec) 00:35:10.582 slat (nsec): min=6508, max=87327, avg=24545.82, stdev=11085.09 00:35:10.582 clat (msec): min=123, max=503, avg=297.53, stdev=68.55 00:35:10.582 lat (msec): min=123, max=503, avg=297.56, stdev=68.55 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 124], 5.00th=[ 197], 10.00th=[ 222], 20.00th=[ 234], 00:35:10.582 | 30.00th=[ 257], 40.00th=[ 288], 50.00th=[ 300], 60.00th=[ 326], 00:35:10.582 | 70.00th=[ 334], 80.00th=[ 351], 90.00th=[ 388], 95.00th=[ 405], 00:35:10.582 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 506], 99.95th=[ 506], 00:35:10.582 | 99.99th=[ 506] 00:35:10.582 bw ( KiB/s): min= 128, max= 256, per=3.74%, avg=210.40, stdev=60.60, samples=20 00:35:10.582 iops : min= 32, max= 64, avg=52.60, stdev=15.15, samples=20 00:35:10.582 lat (msec) : 250=29.15%, 500=70.48%, 750=0.37% 00:35:10.582 cpu : usr=97.18%, sys=1.61%, ctx=146, majf=0, minf=42 00:35:10.582 IO depths : 1=4.8%, 2=11.1%, 4=25.1%, 8=51.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938304: Wed Jul 24 09:20:47 2024 00:35:10.582 
read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10123msec) 00:35:10.582 slat (usec): min=8, max=283, avg=30.67, stdev=24.31 00:35:10.582 clat (msec): min=122, max=457, avg=259.96, stdev=61.20 00:35:10.582 lat (msec): min=122, max=457, avg=259.99, stdev=61.21 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 124], 5.00th=[ 125], 10.00th=[ 174], 20.00th=[ 230], 00:35:10.582 | 30.00th=[ 236], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 262], 00:35:10.582 | 70.00th=[ 292], 80.00th=[ 317], 90.00th=[ 342], 95.00th=[ 355], 00:35:10.582 | 99.00th=[ 359], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 456], 00:35:10.582 | 99.99th=[ 456] 00:35:10.582 bw ( KiB/s): min= 128, max= 384, per=4.31%, avg=242.40, stdev=55.49, samples=20 00:35:10.582 iops : min= 32, max= 96, avg=60.60, stdev=13.87, samples=20 00:35:10.582 lat (msec) : 250=43.41%, 500=56.59% 00:35:10.582 cpu : usr=97.27%, sys=1.84%, ctx=39, majf=0, minf=36 00:35:10.582 IO depths : 1=4.3%, 2=10.6%, 4=25.1%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938305: Wed Jul 24 09:20:47 2024 00:35:10.582 read: IOPS=75, BW=302KiB/s (309kB/s)(3056KiB/10121msec) 00:35:10.582 slat (usec): min=4, max=100, avg=20.60, stdev=20.23 00:35:10.582 clat (msec): min=3, max=389, avg=211.43, stdev=71.63 00:35:10.582 lat (msec): min=3, max=389, avg=211.45, stdev=71.63 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 4], 5.00th=[ 22], 10.00th=[ 70], 20.00th=[ 209], 00:35:10.582 | 30.00th=[ 218], 40.00th=[ 226], 50.00th=[ 228], 60.00th=[ 236], 00:35:10.582 | 70.00th=[ 241], 80.00th=[ 251], 90.00th=[ 264], 95.00th=[ 288], 00:35:10.582 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 388], 99.95th=[ 388], 00:35:10.582 | 99.99th=[ 388] 00:35:10.582 bw ( KiB/s): min= 240, max= 896, per=5.32%, avg=299.20, stdev=143.31, samples=20 00:35:10.582 iops : min= 60, max= 224, avg=74.80, stdev=35.83, samples=20 00:35:10.582 lat (msec) : 4=4.19%, 10=0.26%, 50=3.93%, 100=2.09%, 250=67.28% 00:35:10.582 lat (msec) : 500=22.25% 00:35:10.582 cpu : usr=98.27%, sys=1.27%, ctx=22, majf=0, minf=34 00:35:10.582 IO depths : 1=0.7%, 2=4.6%, 4=17.8%, 8=65.1%, 16=11.9%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=92.2%, 8=2.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938306: Wed Jul 24 09:20:47 2024 00:35:10.582 read: IOPS=52, BW=210KiB/s (215kB/s)(2112KiB/10045msec) 00:35:10.582 slat (usec): min=5, max=114, avg=50.56, stdev=27.92 00:35:10.582 clat (msec): min=144, max=473, avg=303.97, stdev=55.44 00:35:10.582 lat (msec): min=144, max=473, avg=304.02, stdev=55.45 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 188], 5.00th=[ 228], 10.00th=[ 239], 20.00th=[ 245], 00:35:10.582 | 30.00th=[ 259], 40.00th=[ 305], 50.00th=[ 321], 60.00th=[ 334], 00:35:10.582 | 70.00th=[ 342], 80.00th=[ 347], 90.00th=[ 368], 95.00th=[ 372], 00:35:10.582 | 99.00th=[ 447], 99.50th=[ 460], 99.90th=[ 472], 99.95th=[ 472], 00:35:10.582 | 99.99th=[ 472] 
00:35:10.582 bw ( KiB/s): min= 128, max= 256, per=3.63%, avg=204.80, stdev=62.85, samples=20 00:35:10.582 iops : min= 32, max= 64, avg=51.20, stdev=15.71, samples=20 00:35:10.582 lat (msec) : 250=24.24%, 500=75.76% 00:35:10.582 cpu : usr=97.90%, sys=1.46%, ctx=35, majf=0, minf=28 00:35:10.582 IO depths : 1=4.4%, 2=10.4%, 4=24.4%, 8=52.7%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938307: Wed Jul 24 09:20:47 2024 00:35:10.582 read: IOPS=65, BW=261KiB/s (267kB/s)(2624KiB/10068msec) 00:35:10.582 slat (usec): min=4, max=191, avg=23.94, stdev=20.84 00:35:10.582 clat (msec): min=121, max=362, avg=245.34, stdev=48.31 00:35:10.582 lat (msec): min=121, max=362, avg=245.37, stdev=48.31 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 123], 5.00th=[ 182], 10.00th=[ 190], 20.00th=[ 224], 00:35:10.582 | 30.00th=[ 228], 40.00th=[ 234], 50.00th=[ 243], 60.00th=[ 247], 00:35:10.582 | 70.00th=[ 255], 80.00th=[ 262], 90.00th=[ 309], 95.00th=[ 351], 00:35:10.582 | 99.00th=[ 359], 99.50th=[ 363], 99.90th=[ 363], 99.95th=[ 363], 00:35:10.582 | 99.99th=[ 363] 00:35:10.582 bw ( KiB/s): min= 128, max= 384, per=4.56%, avg=256.00, stdev=70.61, samples=20 00:35:10.582 iops : min= 32, max= 96, avg=64.00, stdev=17.65, samples=20 00:35:10.582 lat (msec) : 250=63.72%, 500=36.28% 00:35:10.582 cpu : usr=97.83%, sys=1.50%, ctx=75, majf=0, minf=25 00:35:10.582 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 filename2: (groupid=0, jobs=1): err= 0: pid=3938308: Wed Jul 24 09:20:47 2024 00:35:10.582 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10056msec) 00:35:10.582 slat (usec): min=8, max=106, avg=23.77, stdev=13.63 00:35:10.582 clat (msec): min=151, max=476, avg=271.55, stdev=52.49 00:35:10.582 lat (msec): min=151, max=476, avg=271.57, stdev=52.49 00:35:10.582 clat percentiles (msec): 00:35:10.582 | 1.00th=[ 153], 5.00th=[ 190], 10.00th=[ 222], 20.00th=[ 230], 00:35:10.582 | 30.00th=[ 239], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 275], 00:35:10.582 | 70.00th=[ 313], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 347], 00:35:10.582 | 99.00th=[ 359], 99.50th=[ 443], 99.90th=[ 477], 99.95th=[ 477], 00:35:10.582 | 99.99th=[ 477] 00:35:10.582 bw ( KiB/s): min= 128, max= 384, per=4.09%, avg=230.40, stdev=62.60, samples=20 00:35:10.582 iops : min= 32, max= 96, avg=57.60, stdev=15.65, samples=20 00:35:10.582 lat (msec) : 250=37.84%, 500=62.16% 00:35:10.582 cpu : usr=97.24%, sys=1.88%, ctx=47, majf=0, minf=35 00:35:10.582 IO depths : 1=4.1%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:10.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.582 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.582 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:10.582 
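[Note on reading the output above: each filename block is fio's standard per-job summary — the clat percentiles give the completion-latency distribution, the "IO depths" line shows how many completions were reaped at each queue depth, and "issued rwts" confirms the jobs were pure reads before the group summary that follows. To compare the jobs at a glance, a small sketch like the one below pairs each pid with its IOPS/BW line; fio.log is a hypothetical capture of this console output, and the pattern assumes the exact fio-3.35 layout printed here.

  # Sketch: pair every job's pid with its throughput line, sorted by IOPS.
  grep -Eo 'pid=[0-9]+|read: IOPS=[0-9]+, BW=[0-9]+KiB/s' fio.log \
    | paste - - \
    | sort -t= -k3 -n
]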
00:35:10.582 Run status group 0 (all jobs): 00:35:10.582 READ: bw=5620KiB/s (5755kB/s), 197KiB/s-302KiB/s (201kB/s-309kB/s), io=55.6MiB (58.3MB), run=10041-10123msec 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:10.582 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 
09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 bdev_null0 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 [2024-07-24 09:20:47.390499] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create 
bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 bdev_null1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.583 { 00:35:10.583 "params": { 00:35:10.583 "name": "Nvme$subsystem", 00:35:10.583 "trtype": "$TEST_TRANSPORT", 00:35:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.583 "adrfam": "ipv4", 00:35:10.583 "trsvcid": "$NVMF_PORT", 00:35:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.583 "hdgst": ${hdgst:-false}, 00:35:10.583 "ddgst": ${ddgst:-false} 00:35:10.583 }, 00:35:10.583 "method": "bdev_nvme_attach_controller" 00:35:10.583 } 00:35:10.583 EOF 00:35:10.583 )") 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:10.583 { 00:35:10.583 "params": { 00:35:10.583 "name": "Nvme$subsystem", 00:35:10.583 "trtype": "$TEST_TRANSPORT", 00:35:10.583 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:10.583 "adrfam": "ipv4", 00:35:10.583 "trsvcid": "$NVMF_PORT", 00:35:10.583 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:10.583 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:10.583 "hdgst": ${hdgst:-false}, 00:35:10.583 "ddgst": ${ddgst:-false} 00:35:10.583 }, 00:35:10.583 "method": "bdev_nvme_attach_controller" 00:35:10.583 } 00:35:10.583 EOF 00:35:10.583 )") 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:10.583 09:20:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:10.583 "params": { 00:35:10.583 "name": "Nvme0", 00:35:10.583 "trtype": "tcp", 00:35:10.583 "traddr": "10.0.0.2", 00:35:10.584 "adrfam": "ipv4", 00:35:10.584 "trsvcid": "4420", 00:35:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:10.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:10.584 "hdgst": false, 00:35:10.584 "ddgst": false 00:35:10.584 }, 00:35:10.584 "method": "bdev_nvme_attach_controller" 00:35:10.584 },{ 00:35:10.584 "params": { 00:35:10.584 "name": "Nvme1", 00:35:10.584 "trtype": "tcp", 00:35:10.584 "traddr": "10.0.0.2", 00:35:10.584 "adrfam": "ipv4", 00:35:10.584 "trsvcid": "4420", 00:35:10.584 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:10.584 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:10.584 "hdgst": false, 00:35:10.584 "ddgst": false 00:35:10.584 }, 00:35:10.584 "method": "bdev_nvme_attach_controller" 00:35:10.584 }' 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:10.584 09:20:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:10.584 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.584 ... 00:35:10.584 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:10.584 ... 
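[Note on the config printed above: the JSON fragments are what gen_nvmf_target_json assembles for the fio bdev plugin — one bdev_nvme_attach_controller entry per subsystem — handed to fio on /dev/fd/62 next to the generated job file on /dev/fd/61. A standalone equivalent might look like the sketch below; the "subsystems"/"bdev" envelope is an assumption about the final config shape, and job.fio stands in for the job file the harness generates.

  # Sketch: attach cnode0 over NVMe/TCP and run a fio job against it.
  cat > bdev.json <<'EOF'
  {
    "subsystems": [{
      "subsystem": "bdev",
      "config": [{
        "method": "bdev_nvme_attach_controller",
        "params": {
          "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode0",
          "hostnqn": "nqn.2016-06.io.spdk:host0",
          "hdgst": false, "ddgst": false
        }
      }]
    }]
  }
  EOF
  # Same invocation style as the log: preload the plugin, point fio at the config.
  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio
]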
00:35:10.584 fio-3.35 00:35:10.584 Starting 4 threads 00:35:10.584 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.845 00:35:15.845 filename0: (groupid=0, jobs=1): err= 0: pid=3939604: Wed Jul 24 09:20:53 2024 00:35:15.845 read: IOPS=1696, BW=13.3MiB/s (13.9MB/s)(66.3MiB/5002msec) 00:35:15.845 slat (nsec): min=3940, max=66302, avg=21739.36, stdev=10127.52 00:35:15.845 clat (usec): min=940, max=8617, avg=4630.11, stdev=440.10 00:35:15.845 lat (usec): min=949, max=8645, avg=4651.85, stdev=440.63 00:35:15.845 clat percentiles (usec): 00:35:15.845 | 1.00th=[ 3032], 5.00th=[ 4178], 10.00th=[ 4359], 20.00th=[ 4424], 00:35:15.845 | 30.00th=[ 4555], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:35:15.845 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5014], 00:35:15.845 | 99.00th=[ 5800], 99.50th=[ 7046], 99.90th=[ 8094], 99.95th=[ 8160], 00:35:15.845 | 99.99th=[ 8586] 00:35:15.845 bw ( KiB/s): min=13392, max=13824, per=25.16%, avg=13573.90, stdev=150.61, samples=10 00:35:15.845 iops : min= 1674, max= 1728, avg=1696.70, stdev=18.82, samples=10 00:35:15.845 lat (usec) : 1000=0.02% 00:35:15.845 lat (msec) : 2=0.35%, 4=2.44%, 10=97.18% 00:35:15.845 cpu : usr=93.62%, sys=5.34%, ctx=13, majf=0, minf=9 00:35:15.845 IO depths : 1=1.0%, 2=20.1%, 4=53.8%, 8=25.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 issued rwts: total=8488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.845 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:15.845 filename0: (groupid=0, jobs=1): err= 0: pid=3939605: Wed Jul 24 09:20:53 2024 00:35:15.845 read: IOPS=1690, BW=13.2MiB/s (13.8MB/s)(66.1MiB/5002msec) 00:35:15.845 slat (nsec): min=4308, max=66258, avg=20291.36, stdev=9525.05 00:35:15.845 clat (usec): min=982, max=9696, avg=4660.81, stdev=398.11 00:35:15.845 lat (usec): min=1000, max=9709, avg=4681.10, stdev=398.40 00:35:15.845 clat percentiles (usec): 00:35:15.845 | 1.00th=[ 3687], 5.00th=[ 4293], 10.00th=[ 4359], 20.00th=[ 4490], 00:35:15.845 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4686], 00:35:15.845 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:35:15.845 | 99.00th=[ 5800], 99.50th=[ 6783], 99.90th=[ 8160], 99.95th=[ 9634], 00:35:15.845 | 99.99th=[ 9634] 00:35:15.845 bw ( KiB/s): min=13232, max=13872, per=25.05%, avg=13517.90, stdev=177.19, samples=10 00:35:15.845 iops : min= 1654, max= 1734, avg=1689.70, stdev=22.13, samples=10 00:35:15.845 lat (usec) : 1000=0.01% 00:35:15.845 lat (msec) : 2=0.21%, 4=1.90%, 10=97.87% 00:35:15.845 cpu : usr=94.14%, sys=4.88%, ctx=16, majf=0, minf=0 00:35:15.845 IO depths : 1=1.0%, 2=16.7%, 4=57.3%, 8=24.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 issued rwts: total=8455,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.845 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:15.845 filename1: (groupid=0, jobs=1): err= 0: pid=3939606: Wed Jul 24 09:20:53 2024 00:35:15.845 read: IOPS=1686, BW=13.2MiB/s (13.8MB/s)(65.9MiB/5003msec) 00:35:15.845 slat (nsec): min=3736, max=66003, avg=22366.52, stdev=8088.73 00:35:15.845 clat (usec): min=945, max=10972, avg=4659.86, stdev=509.27 00:35:15.845 lat (usec): min=963, max=11001, avg=4682.23, stdev=509.35 
00:35:15.845 clat percentiles (usec): 00:35:15.845 | 1.00th=[ 3228], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:35:15.845 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4621], 60.00th=[ 4686], 00:35:15.845 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5014], 00:35:15.845 | 99.00th=[ 6718], 99.50th=[ 7701], 99.90th=[ 8455], 99.95th=[10421], 00:35:15.845 | 99.99th=[10945] 00:35:15.845 bw ( KiB/s): min=13312, max=13824, per=24.99%, avg=13484.80, stdev=147.76, samples=10 00:35:15.845 iops : min= 1664, max= 1728, avg=1685.60, stdev=18.47, samples=10 00:35:15.845 lat (usec) : 1000=0.02% 00:35:15.845 lat (msec) : 2=0.53%, 4=1.65%, 10=97.70%, 20=0.09% 00:35:15.845 cpu : usr=94.46%, sys=4.86%, ctx=17, majf=0, minf=9 00:35:15.845 IO depths : 1=1.1%, 2=21.5%, 4=52.7%, 8=24.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 issued rwts: total=8436,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.845 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:15.845 filename1: (groupid=0, jobs=1): err= 0: pid=3939607: Wed Jul 24 09:20:53 2024 00:35:15.845 read: IOPS=1671, BW=13.1MiB/s (13.7MB/s)(65.3MiB/5002msec) 00:35:15.845 slat (usec): min=4, max=194, avg=21.75, stdev=10.78 00:35:15.845 clat (usec): min=931, max=12595, avg=4701.32, stdev=710.81 00:35:15.845 lat (usec): min=945, max=12607, avg=4723.07, stdev=710.65 00:35:15.845 clat percentiles (usec): 00:35:15.845 | 1.00th=[ 1729], 5.00th=[ 4146], 10.00th=[ 4359], 20.00th=[ 4490], 00:35:15.845 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:35:15.845 | 70.00th=[ 4752], 80.00th=[ 4883], 90.00th=[ 5014], 95.00th=[ 5473], 00:35:15.845 | 99.00th=[ 7898], 99.50th=[ 8291], 99.90th=[ 8848], 99.95th=[ 8848], 00:35:15.845 | 99.99th=[12649] 00:35:15.845 bw ( KiB/s): min=13056, max=13824, per=24.80%, avg=13379.20, stdev=221.29, samples=10 00:35:15.845 iops : min= 1632, max= 1728, avg=1672.40, stdev=27.66, samples=10 00:35:15.845 lat (usec) : 1000=0.07% 00:35:15.845 lat (msec) : 2=1.17%, 4=2.28%, 10=96.46%, 20=0.01% 00:35:15.845 cpu : usr=95.10%, sys=4.36%, ctx=10, majf=0, minf=9 00:35:15.845 IO depths : 1=0.1%, 2=16.8%, 4=56.4%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.845 issued rwts: total=8363,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.845 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:15.845 00:35:15.845 Run status group 0 (all jobs): 00:35:15.845 READ: bw=52.7MiB/s (55.2MB/s), 13.1MiB/s-13.3MiB/s (13.7MB/s-13.9MB/s), io=264MiB (276MB), run=5002-5003msec 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 
00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:15.845 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 00:35:15.846 real 0m24.216s 00:35:15.846 user 4m33.364s 00:35:15.846 sys 0m6.627s 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 ************************************ 00:35:15.846 END TEST fio_dif_rand_params 00:35:15.846 ************************************ 00:35:15.846 09:20:53 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:15.846 09:20:53 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:15.846 09:20:53 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 ************************************ 00:35:15.846 START TEST fio_dif_digest 00:35:15.846 ************************************ 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 
00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 bdev_null0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:15.846 [2024-07-24 09:20:53.655365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 
00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.846 { 00:35:15.846 "params": { 00:35:15.846 "name": "Nvme$subsystem", 00:35:15.846 "trtype": "$TEST_TRANSPORT", 00:35:15.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.846 "adrfam": "ipv4", 00:35:15.846 "trsvcid": "$NVMF_PORT", 00:35:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.846 "hdgst": ${hdgst:-false}, 00:35:15.846 "ddgst": ${ddgst:-false} 00:35:15.846 }, 00:35:15.846 "method": "bdev_nvme_attach_controller" 00:35:15.846 } 00:35:15.846 EOF 00:35:15.846 )") 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:15.846 "params": { 00:35:15.846 "name": "Nvme0", 00:35:15.846 "trtype": "tcp", 00:35:15.846 "traddr": "10.0.0.2", 00:35:15.846 "adrfam": "ipv4", 00:35:15.846 "trsvcid": "4420", 00:35:15.846 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.846 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.846 "hdgst": true, 00:35:15.846 "ddgst": true 00:35:15.846 }, 00:35:15.846 "method": "bdev_nvme_attach_controller" 00:35:15.846 }' 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.846 09:20:53 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.846 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:15.846 ... 
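[Note on the digest config above: relative to the rand_params run, the controller options change only the digest pair — "hdgst": true and "ddgst": true make the initiator negotiate header and data digests on the NVMe/TCP connection, so every PDU in both directions carries a CRC32C checksum, which is what this test exercises. Starting from the bdev.json sketch earlier, the same flip can be expressed with jq (assuming the single-entry layout used there):

  # Sketch: enable NVMe/TCP header+data digests in the sketched bdev config.
  jq '.subsystems[0].config[0].params.hdgst = true
      | .subsystems[0].config[0].params.ddgst = true' bdev.json > bdev-digest.json
]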
00:35:15.846 fio-3.35 00:35:15.846 Starting 3 threads 00:35:15.846 EAL: No free 2048 kB hugepages reported on node 1 00:35:28.076 00:35:28.076 filename0: (groupid=0, jobs=1): err= 0: pid=3940444: Wed Jul 24 09:21:04 2024 00:35:28.076 read: IOPS=204, BW=25.5MiB/s (26.8MB/s)(257MiB/10048msec) 00:35:28.076 slat (nsec): min=4963, max=51563, avg=15632.60, stdev=5074.83 00:35:28.076 clat (usec): min=8385, max=54425, avg=14648.42, stdev=1621.98 00:35:28.076 lat (usec): min=8407, max=54438, avg=14664.05, stdev=1622.01 00:35:28.076 clat percentiles (usec): 00:35:28.076 | 1.00th=[11863], 5.00th=[12780], 10.00th=[13304], 20.00th=[13698], 00:35:28.076 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:35:28.076 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16057], 95.00th=[16581], 00:35:28.076 | 99.00th=[17171], 99.50th=[17695], 99.90th=[18220], 99.95th=[47973], 00:35:28.076 | 99.99th=[54264] 00:35:28.076 bw ( KiB/s): min=25088, max=26880, per=34.05%, avg=26240.00, stdev=554.06, samples=20 00:35:28.076 iops : min= 196, max= 210, avg=205.00, stdev= 4.33, samples=20 00:35:28.076 lat (msec) : 10=0.39%, 20=99.51%, 50=0.05%, 100=0.05% 00:35:28.076 cpu : usr=91.53%, sys=8.00%, ctx=32, majf=0, minf=107 00:35:28.076 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 issued rwts: total=2052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.076 filename0: (groupid=0, jobs=1): err= 0: pid=3940445: Wed Jul 24 09:21:04 2024 00:35:28.076 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10048msec) 00:35:28.076 slat (nsec): min=7269, max=40362, avg=15255.51, stdev=4645.73 00:35:28.076 clat (usec): min=9631, max=48261, avg=15082.65, stdev=1531.31 00:35:28.076 lat (usec): min=9659, max=48277, avg=15097.90, stdev=1531.12 00:35:28.076 clat percentiles (usec): 00:35:28.076 | 1.00th=[12256], 5.00th=[13304], 10.00th=[13698], 20.00th=[14091], 00:35:28.076 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:35:28.076 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:35:28.076 | 99.00th=[17695], 99.50th=[18220], 99.90th=[47973], 99.95th=[48497], 00:35:28.076 | 99.99th=[48497] 00:35:28.076 bw ( KiB/s): min=24320, max=26368, per=33.08%, avg=25487.35, stdev=555.82, samples=20 00:35:28.076 iops : min= 190, max= 206, avg=199.10, stdev= 4.33, samples=20 00:35:28.076 lat (msec) : 10=0.05%, 20=99.85%, 50=0.10% 00:35:28.076 cpu : usr=91.82%, sys=7.71%, ctx=30, majf=0, minf=133 00:35:28.076 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 issued rwts: total=1993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.076 filename0: (groupid=0, jobs=1): err= 0: pid=3940446: Wed Jul 24 09:21:04 2024 00:35:28.076 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(251MiB/10046msec) 00:35:28.076 slat (usec): min=7, max=103, avg=19.16, stdev= 7.42 00:35:28.076 clat (usec): min=10858, max=59195, avg=14993.70, stdev=2317.55 00:35:28.076 lat (usec): min=10883, max=59215, avg=15012.86, stdev=2317.35 00:35:28.076 clat percentiles (usec): 00:35:28.076 | 
1.00th=[12387], 5.00th=[13173], 10.00th=[13435], 20.00th=[13960], 00:35:28.076 | 30.00th=[14353], 40.00th=[14615], 50.00th=[14877], 60.00th=[15139], 00:35:28.076 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16450], 95.00th=[16909], 00:35:28.076 | 99.00th=[17695], 99.50th=[18220], 99.90th=[58983], 99.95th=[58983], 00:35:28.076 | 99.99th=[58983] 00:35:28.076 bw ( KiB/s): min=23040, max=27136, per=33.25%, avg=25625.60, stdev=946.64, samples=20 00:35:28.076 iops : min= 180, max= 212, avg=200.20, stdev= 7.40, samples=20 00:35:28.076 lat (msec) : 20=99.75%, 50=0.05%, 100=0.20% 00:35:28.076 cpu : usr=90.12%, sys=8.32%, ctx=288, majf=0, minf=240 00:35:28.076 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:28.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:28.076 issued rwts: total=2004,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:28.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:28.076 00:35:28.076 Run status group 0 (all jobs): 00:35:28.076 READ: bw=75.3MiB/s (78.9MB/s), 24.8MiB/s-25.5MiB/s (26.0MB/s-26.8MB/s), io=756MiB (793MB), run=10046-10048msec 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:28.076 00:35:28.076 real 0m11.173s 00:35:28.076 user 0m28.594s 00:35:28.076 sys 0m2.694s 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:28.076 09:21:04 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:28.076 ************************************ 00:35:28.076 END TEST fio_dif_digest 00:35:28.076 ************************************ 00:35:28.076 09:21:04 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:28.076 09:21:04 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:28.076 rmmod nvme_tcp 00:35:28.076 rmmod nvme_fabrics 00:35:28.076 
rmmod nvme_keyring 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3934401 ']' 00:35:28.076 09:21:04 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3934401 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3934401 ']' 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3934401 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3934401 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:28.076 09:21:04 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3934401' 00:35:28.077 killing process with pid 3934401 00:35:28.077 09:21:04 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3934401 00:35:28.077 09:21:04 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3934401 00:35:28.077 09:21:05 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:28.077 09:21:05 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:28.077 Waiting for block devices as requested 00:35:28.334 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:28.334 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:28.334 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:28.592 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:28.592 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:28.592 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:28.592 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:28.850 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:28.850 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:28.850 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:29.108 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:29.109 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:29.109 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:29.109 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:29.368 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:29.368 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:29.368 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:29.626 09:21:07 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:29.626 09:21:07 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:29.626 09:21:07 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:29.626 09:21:07 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:29.626 09:21:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:29.626 09:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:29.626 09:21:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.525 09:21:09 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:31.525 00:35:31.525 real 1m6.476s 00:35:31.525 user 6m27.038s 00:35:31.525 sys 0m19.396s 00:35:31.525 09:21:09 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:31.525 09:21:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 
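For orientation, the nvmftestfini teardown traced above amounts to a handful of shell steps. The sketch below is a condensed reconstruction, not the harness itself; the PID, workspace path, and interface/namespace names are the ones from this run.

  # Condensed teardown sketch (pid 3934401 is this run's nvmf_tgt)
  modprobe -v -r nvme-tcp           # drops nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3934401                      # stop the target application
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset   # rebind PCI devices to kernel drivers
  ip netns delete cvl_0_0_ns_spdk   # remove the target-side namespace
  ip -4 addr flush cvl_0_1          # clear the initiator interface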
00:35:31.525 ************************************ 00:35:31.525 END TEST nvmf_dif 00:35:31.525 ************************************ 00:35:31.525 09:21:09 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:31.525 09:21:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:31.525 09:21:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:31.525 09:21:09 -- common/autotest_common.sh@10 -- # set +x 00:35:31.525 ************************************ 00:35:31.525 START TEST nvmf_abort_qd_sizes 00:35:31.525 ************************************ 00:35:31.525 09:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:31.525 * Looking for test storage... 00:35:31.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.783 09:21:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.784 09:21:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:31.784 09:21:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:35:33.684 Found 0000:09:00.0 (0x8086 - 0x159b) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:35:33.684 Found 0000:09:00.1 (0x8086 - 0x159b) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:33.684 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:35:33.685 Found net devices under 0000:09:00.0: cvl_0_0 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:35:33.685 Found net devices under 0000:09:00.1: cvl_0_1 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
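The two ice ports found above get split into a back-to-back TCP test rig: cvl_0_0 moves into a network namespace to carry the target listener, while cvl_0_1 stays in the root namespace as the initiator. A minimal sketch of the wiring that nvmf_tcp_init performs next, using the names and addresses from this run:

  # Back-to-back topology (names/IPs as they appear in this log)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check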
00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:33.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:33.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:35:33.685 00:35:33.685 --- 10.0.0.2 ping statistics --- 00:35:33.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.685 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:33.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:33.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:35:33.685 00:35:33.685 --- 10.0.0.1 ping statistics --- 00:35:33.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:33.685 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:33.685 09:21:11 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:35.060 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:35.060 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:35.060 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:35.995 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3945847 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3945847 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3945847 ']' 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:35.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:35.995 09:21:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:35.995 [2024-07-24 09:21:14.014868] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:35:35.995 [2024-07-24 09:21:14.014949] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.995 EAL: No free 2048 kB hugepages reported on node 1 00:35:35.995 [2024-07-24 09:21:14.054923] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:35:35.995 [2024-07-24 09:21:14.081939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:36.253 [2024-07-24 09:21:14.166989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:36.253 [2024-07-24 09:21:14.167042] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:36.253 [2024-07-24 09:21:14.167069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:36.253 [2024-07-24 09:21:14.167080] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:36.253 [2024-07-24 09:21:14.167090] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:36.253 [2024-07-24 09:21:14.167195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.253 [2024-07-24 09:21:14.167291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:36.253 [2024-07-24 09:21:14.167349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:36.253 [2024-07-24 09:21:14.167351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.253 09:21:14 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:0b:00.0 ]] 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- 
scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:0b:00.0 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:36.254 09:21:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:36.254 ************************************ 00:35:36.254 START TEST spdk_target_abort 00:35:36.254 ************************************ 00:35:36.254 09:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:35:36.254 09:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:36.254 09:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:35:36.254 09:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:36.254 09:21:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 spdk_targetn1 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 [2024-07-24 09:21:17.173032] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 
-- # set +x 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:39.534 [2024-07-24 09:21:17.206606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:39.534 09:21:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:39.534 EAL: No free 2048 kB hugepages reported on node 1 00:35:42.812 Initializing NVMe Controllers 00:35:42.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:42.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:42.812 Initialization complete. Launching workers. 00:35:42.812 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11368, failed: 0 00:35:42.812 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1442, failed to submit 9926 00:35:42.812 success 767, unsuccess 675, failed 0 00:35:42.812 09:21:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:42.812 09:21:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:42.812 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.090 Initializing NVMe Controllers 00:35:46.090 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:46.090 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:46.090 Initialization complete. Launching workers. 00:35:46.090 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8843, failed: 0 00:35:46.090 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1198, failed to submit 7645 00:35:46.090 success 321, unsuccess 877, failed 0 00:35:46.090 09:21:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:46.090 09:21:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:46.090 EAL: No free 2048 kB hugepages reported on node 1 00:35:49.368 Initializing NVMe Controllers 00:35:49.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:49.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:49.368 Initialization complete. Launching workers. 
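# Note: the invocations in this block are the same abort example run at
# queue depths 4, 24 and 64; the driver loop in rabort reduces to roughly
# the sketch below (paths and connect string as in this run, not the
# harness verbatim):
#
#   ABORT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort
#   TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
#   for qd in 4 24 64; do
#       # -w rw -M 50: mixed 50/50 read/write; -o 4096: 4 KiB I/O size.
#       # Aborts are issued against in-flight commands, so deeper queues
#       # leave more candidates to abort.
#       "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
#   done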
00:35:49.368 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31460, failed: 0 00:35:49.368 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2795, failed to submit 28665 00:35:49.368 success 560, unsuccess 2235, failed 0 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:49.368 09:21:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3945847 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3945847 ']' 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3945847 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3945847 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3945847' 00:35:50.299 killing process with pid 3945847 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3945847 00:35:50.299 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3945847 00:35:50.557 00:35:50.557 real 0m14.180s 00:35:50.557 user 0m52.621s 00:35:50.557 sys 0m2.895s 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:50.557 ************************************ 00:35:50.557 END TEST spdk_target_abort 00:35:50.557 ************************************ 00:35:50.557 09:21:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:50.557 09:21:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:50.557 09:21:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:50.557 09:21:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:50.557 ************************************ 00:35:50.557 START TEST kernel_target_abort 00:35:50.557 
************************************ 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:50.557 09:21:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:51.490 Waiting for block devices as requested 00:35:51.748 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:51.748 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:51.748 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:51.748 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:52.007 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:52.007 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:52.007 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:52.007 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:52.297 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:52.297 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:52.297 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:52.561 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:52.561 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:52.561 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:52.561 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:52.819 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:52.819 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:52.819 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:35:52.819 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:52.819 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:35:52.819 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:35:52.819 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:52.820 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:35:52.820 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:35:52.820 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:35:52.820 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:53.077 No valid GPT data, bailing 00:35:53.077 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:53.077 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:35:53.077 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:35:53.077 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:35:53.077 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:35:53.078 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:53.078 09:21:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:53.078 09:21:31 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:35:53.078 00:35:53.078 Discovery Log Number of Records 2, Generation counter 2 00:35:53.078 =====Discovery Log Entry 0====== 00:35:53.078 trtype: tcp 00:35:53.078 adrfam: ipv4 00:35:53.078 subtype: current discovery subsystem 00:35:53.078 treq: not specified, sq flow control disable supported 00:35:53.078 portid: 1 00:35:53.078 trsvcid: 4420 00:35:53.078 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:53.078 traddr: 10.0.0.1 00:35:53.078 eflags: none 00:35:53.078 sectype: none 00:35:53.078 =====Discovery Log Entry 1====== 00:35:53.078 trtype: tcp 00:35:53.078 adrfam: ipv4 00:35:53.078 subtype: nvme subsystem 00:35:53.078 treq: not specified, sq flow control disable supported 00:35:53.078 portid: 1 00:35:53.078 trsvcid: 4420 00:35:53.078 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:53.078 traddr: 10.0.0.1 00:35:53.078 eflags: none 00:35:53.078 sectype: none 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.078 09:21:31 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:53.078 09:21:31 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:53.078 EAL: No free 2048 kB hugepages reported on node 1 00:35:56.353 Initializing NVMe Controllers 00:35:56.353 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:56.353 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:56.353 Initialization complete. Launching workers. 00:35:56.353 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35574, failed: 0 00:35:56.353 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35574, failed to submit 0 00:35:56.353 success 0, unsuccess 35574, failed 0 00:35:56.353 09:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:56.353 09:21:34 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:56.353 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.628 Initializing NVMe Controllers 00:35:59.628 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:59.628 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:59.628 Initialization complete. Launching workers. 
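# Note: the kernel target exercised in this block was assembled earlier
# through nvmet configfs; condensed, the setup amounts to roughly the
# sketch below (subsystem name, backing device and address as in this
# run; the explicit nvmet-tcp modprobe is an assumption, the module may
# be demand-loaded when the port type is set):
#
#   modprobe nvmet && modprobe nvmet-tcp
#   cd /sys/kernel/config/nvmet
#   mkdir subsystems/nqn.2016-06.io.spdk:testnqn
#   mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
#   mkdir ports/1
#   echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
#   echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
#   echo 1            > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
#   echo 10.0.0.1     > ports/1/addr_traddr
#   echo tcp          > ports/1/addr_trtype
#   echo 4420         > ports/1/addr_trsvcid
#   echo ipv4         > ports/1/addr_adrfam
#   ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
#   nvme discover -t tcp -a 10.0.0.1 -s 4420   # yields the two-entry log shown above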
00:35:59.628 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 69572, failed: 0 00:35:59.628 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17566, failed to submit 52006 00:35:59.628 success 0, unsuccess 17566, failed 0 00:35:59.628 09:21:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:59.628 09:21:37 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:59.628 EAL: No free 2048 kB hugepages reported on node 1 00:36:02.903 Initializing NVMe Controllers 00:36:02.903 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:02.903 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:02.903 Initialization complete. Launching workers. 00:36:02.903 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67758, failed: 0 00:36:02.903 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16934, failed to submit 50824 00:36:02.903 success 0, unsuccess 16934, failed 0 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:02.903 09:21:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:03.835 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:03.835 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:03.835 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:03.835 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:04.768 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:04.768 00:36:04.768 real 0m14.241s 00:36:04.768 user 0m5.603s 00:36:04.768 sys 0m3.272s 00:36:04.768 09:21:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:04.768 09:21:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:04.768 ************************************ 00:36:04.768 END TEST kernel_target_abort 00:36:04.768 ************************************ 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:04.768 rmmod nvme_tcp 00:36:04.768 rmmod nvme_fabrics 00:36:04.768 rmmod nvme_keyring 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3945847 ']' 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3945847 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3945847 ']' 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3945847 00:36:04.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3945847) - No such process 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3945847 is not found' 00:36:04.768 Process with pid 3945847 is not found 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:04.768 09:21:42 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:06.138 Waiting for block devices as requested 00:36:06.138 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:06.138 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:06.138 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:06.138 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:06.138 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:06.396 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:06.396 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:06.396 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:06.396 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:06.654 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:06.654 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:06.912 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:06.912 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:06.912 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:06.912 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:07.169 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:36:07.169 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:07.169 09:21:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:09.699 09:21:47 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:09.699 00:36:09.699 real 0m37.697s 00:36:09.699 user 1m0.270s 00:36:09.699 sys 0m9.436s 00:36:09.700 09:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:09.700 09:21:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:09.700 ************************************ 00:36:09.700 END TEST nvmf_abort_qd_sizes 00:36:09.700 ************************************ 00:36:09.700 09:21:47 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:09.700 09:21:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:09.700 09:21:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:09.700 09:21:47 -- common/autotest_common.sh@10 -- # set +x 00:36:09.700 ************************************ 00:36:09.700 START TEST keyring_file 00:36:09.700 ************************************ 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:09.700 * Looking for test storage... 
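
[Editor's note] Before the END TEST banner above, clean_kernel_target tears down the kernel nvmet configfs tree in reverse creation order. A condensed sketch of the commands traced there; the attribute behind the bare 'echo 0' is an assumption, since the trace does not show its destination:

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable"  # assumed target of the 'echo 0' above
    rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"                 # unlink port -> subsystem first
    rmdir "/sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "/sys/kernel/config/nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet              # only possible once no holders remain
    "$rootdir/scripts/setup.sh"              # rebind devices to vfio-pci, as the ioatdma -> vfio-pci lines show
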
00:36:09.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:09.700 09:21:47 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:09.700 09:21:47 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:09.700 09:21:47 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:09.700 09:21:47 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.700 09:21:47 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.700 09:21:47 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.700 09:21:47 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:09.700 09:21:47 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gp3wRirb7M 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:09.700 09:21:47 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gp3wRirb7M 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gp3wRirb7M 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gp3wRirb7M 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fkdhISneye 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:09.700 09:21:47 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fkdhISneye 00:36:09.700 09:21:47 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fkdhISneye 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.fkdhISneye 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@30 -- # tgtpid=3951614 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:09.700 09:21:47 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3951614 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3951614 ']' 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:09.700 09:21:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:09.700 [2024-07-24 09:21:47.560053] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:36:09.700 [2024-07-24 09:21:47.560175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951614 ] 00:36:09.700 EAL: No free 2048 kB hugepages reported on node 1 00:36:09.700 [2024-07-24 09:21:47.594930] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
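
[Editor's note] prep_key, traced from keyring/common.sh just above, turns a hex key into an NVMe TLS PSK interchange file. A sketch under the assumption that the inline 'python -' step matches the TP 8006 interchange layout (base64 of the key bytes plus a little-endian CRC32, behind the NVMeTLSkey-1 prefix visible in the trace); the exact encoding of the digest field is likewise an assumption:

    key=00112233445566778899aabbccddeeff    # key0 from file.sh@15
    path=$(mktemp)                          # /tmp/tmp.gp3wRirb7M in this run
    python3 - "$key" > "$path" <<'EOF'
    import base64, binascii, sys
    raw = bytes.fromhex(sys.argv[1])
    crc = binascii.crc32(raw).to_bytes(4, "little")   # assumed CRC placement
    # "00" assumed to encode digest=0 (no hash) from the trace
    print("NVMeTLSkey-1:00:%s:" % base64.b64encode(raw + crc).decode())
    EOF
    chmod 0600 "$path"    # file.sh later flips this to 0660 to prove bad permissions are rejected
    echo "$path"
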
00:36:09.700 [2024-07-24 09:21:47.638286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.700 [2024-07-24 09:21:47.719487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:09.958 09:21:47 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:09.959 09:21:47 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:09.959 09:21:47 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:09.959 09:21:47 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.959 09:21:47 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 [2024-07-24 09:21:47.987748] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:09.959 null0 00:36:09.959 [2024-07-24 09:21:48.019783] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:09.959 [2024-07-24 09:21:48.020241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:09.959 [2024-07-24 09:21:48.027780] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:09.959 09:21:48 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:09.959 [2024-07-24 09:21:48.039802] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:09.959 request: 00:36:09.959 { 00:36:09.959 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.959 "secure_channel": false, 00:36:09.959 "listen_address": { 00:36:09.959 "trtype": "tcp", 00:36:09.959 "traddr": "127.0.0.1", 00:36:09.959 "trsvcid": "4420" 00:36:09.959 }, 00:36:09.959 "method": "nvmf_subsystem_add_listener", 00:36:09.959 "req_id": 1 00:36:09.959 } 00:36:09.959 Got JSON-RPC error response 00:36:09.959 response: 00:36:09.959 { 00:36:09.959 "code": -32602, 00:36:09.959 "message": "Invalid parameters" 00:36:09.959 } 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:09.959 09:21:48 keyring_file -- keyring/file.sh@46 -- # bperfpid=3951622 00:36:09.959 09:21:48 
keyring_file -- keyring/file.sh@48 -- # waitforlisten 3951622 /var/tmp/bperf.sock 00:36:09.959 09:21:48 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3951622 ']' 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:09.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:09.959 09:21:48 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:10.217 [2024-07-24 09:21:48.087900] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:36:10.217 [2024-07-24 09:21:48.087971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3951622 ] 00:36:10.217 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.217 [2024-07-24 09:21:48.118798] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 00:36:10.217 [2024-07-24 09:21:48.146206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:10.217 [2024-07-24 09:21:48.232466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:10.474 09:21:48 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:10.474 09:21:48 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:10.474 09:21:48 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:10.474 09:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:10.732 09:21:48 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fkdhISneye 00:36:10.732 09:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fkdhISneye 00:36:10.990 09:21:48 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:10.990 09:21:48 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:10.990 09:21:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.990 09:21:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.990 09:21:48 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.990 09:21:49 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.gp3wRirb7M == \/\t\m\p\/\t\m\p\.\g\p\3\w\R\i\r\b\7\M ]] 00:36:10.990 09:21:49 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:36:10.990 09:21:49 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:10.990 09:21:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd 
keyring_get_keys 00:36:10.990 09:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.990 09:21:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:11.247 09:21:49 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.fkdhISneye == \/\t\m\p\/\t\m\p\.\f\k\d\h\I\S\n\e\y\e ]] 00:36:11.247 09:21:49 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:11.247 09:21:49 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:11.247 09:21:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:11.247 09:21:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.247 09:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.247 09:21:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:11.505 09:21:49 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:11.505 09:21:49 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:11.505 09:21:49 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:11.505 09:21:49 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:11.505 09:21:49 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.505 09:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.505 09:21:49 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:11.762 09:21:49 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:11.762 09:21:49 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.762 09:21:49 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:12.019 [2024-07-24 09:21:50.066100] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:12.277 nvme0n1 00:36:12.277 09:21:50 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:12.277 09:21:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.277 09:21:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.277 09:21:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.277 09:21:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.277 09:21:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.535 09:21:50 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:12.535 09:21:50 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:12.535 09:21:50 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:12.535 09:21:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.535 09:21:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.535 09:21:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:12.535 09:21:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:12.793 09:21:50 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:12.793 09:21:50 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:12.793 Running I/O for 1 seconds... 00:36:13.726 00:36:13.726 Latency(us) 00:36:13.726 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.726 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:13.726 nvme0n1 : 1.02 6285.23 24.55 0.00 0.00 20147.19 3713.71 23981.32 00:36:13.726 =================================================================================================================== 00:36:13.726 Total : 6285.23 24.55 0.00 0.00 20147.19 3713.71 23981.32 00:36:13.726 0 00:36:13.726 09:21:51 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:13.726 09:21:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:13.983 09:21:52 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:13.983 09:21:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:13.983 09:21:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:13.983 09:21:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:13.983 09:21:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:13.983 09:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.240 09:21:52 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:14.240 09:21:52 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:14.240 09:21:52 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:14.240 09:21:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.240 09:21:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.240 09:21:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:14.240 09:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.498 09:21:52 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:14.498 09:21:52 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:14.498 09:21:52 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:14.498 09:21:52 keyring_file -- 
common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:14.498 09:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:14.756 [2024-07-24 09:21:52.772762] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:14.756 [2024-07-24 09:21:52.773352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ce4e0 (107): Transport endpoint is not connected 00:36:14.756 [2024-07-24 09:21:52.774341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26ce4e0 (9): Bad file descriptor 00:36:14.756 [2024-07-24 09:21:52.775341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:14.756 [2024-07-24 09:21:52.775361] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:14.756 [2024-07-24 09:21:52.775376] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:14.756 request: 00:36:14.756 { 00:36:14.756 "name": "nvme0", 00:36:14.756 "trtype": "tcp", 00:36:14.756 "traddr": "127.0.0.1", 00:36:14.756 "adrfam": "ipv4", 00:36:14.756 "trsvcid": "4420", 00:36:14.756 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.756 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.756 "prchk_reftag": false, 00:36:14.756 "prchk_guard": false, 00:36:14.756 "hdgst": false, 00:36:14.756 "ddgst": false, 00:36:14.756 "psk": "key1", 00:36:14.756 "method": "bdev_nvme_attach_controller", 00:36:14.756 "req_id": 1 00:36:14.756 } 00:36:14.756 Got JSON-RPC error response 00:36:14.756 response: 00:36:14.756 { 00:36:14.756 "code": -5, 00:36:14.756 "message": "Input/output error" 00:36:14.756 } 00:36:14.756 09:21:52 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:14.756 09:21:52 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:14.756 09:21:52 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:14.756 09:21:52 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:14.756 09:21:52 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:14.756 09:21:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:14.756 09:21:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:14.756 09:21:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:14.756 09:21:52 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:14.756 09:21:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.014 09:21:53 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:15.014 09:21:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:15.014 09:21:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.014 09:21:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.014 09:21:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.014 09:21:53 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.014 09:21:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.271 09:21:53 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:15.271 09:21:53 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:15.271 09:21:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:15.532 09:21:53 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:15.532 09:21:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:15.823 09:21:53 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:15.823 09:21:53 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:15.823 09:21:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.084 09:21:54 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:16.084 09:21:54 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.gp3wRirb7M 00:36:16.084 09:21:54 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.084 09:21:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.084 09:21:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.341 [2024-07-24 09:21:54.269564] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gp3wRirb7M': 0100660 00:36:16.341 [2024-07-24 09:21:54.269612] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:16.341 request: 00:36:16.341 { 00:36:16.341 "name": "key0", 00:36:16.341 "path": "/tmp/tmp.gp3wRirb7M", 00:36:16.341 "method": "keyring_file_add_key", 00:36:16.341 "req_id": 1 00:36:16.341 } 00:36:16.341 Got JSON-RPC error response 00:36:16.341 response: 00:36:16.341 { 00:36:16.341 "code": -1, 00:36:16.341 "message": "Operation not permitted" 00:36:16.341 } 00:36:16.341 09:21:54 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:16.341 09:21:54 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:16.341 09:21:54 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:16.341 09:21:54 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:16.341 09:21:54 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.gp3wRirb7M 00:36:16.341 09:21:54 keyring_file -- 
keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.341 09:21:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gp3wRirb7M 00:36:16.599 09:21:54 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.gp3wRirb7M 00:36:16.599 09:21:54 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:16.599 09:21:54 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:16.599 09:21:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:16.599 09:21:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:16.599 09:21:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:16.599 09:21:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:16.857 09:21:54 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:16.857 09:21:54 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:16.857 09:21:54 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:16.857 09:21:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.114 [2024-07-24 09:21:55.023590] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gp3wRirb7M': No such file or directory 00:36:17.115 [2024-07-24 09:21:55.023629] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:17.115 [2024-07-24 09:21:55.023671] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:17.115 [2024-07-24 09:21:55.023685] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:17.115 [2024-07-24 09:21:55.023700] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:17.115 request: 00:36:17.115 { 00:36:17.115 "name": "nvme0", 00:36:17.115 "trtype": "tcp", 00:36:17.115 "traddr": "127.0.0.1", 00:36:17.115 "adrfam": "ipv4", 00:36:17.115 "trsvcid": "4420", 00:36:17.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:17.115 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:17.115 "prchk_reftag": false, 00:36:17.115 
"prchk_guard": false, 00:36:17.115 "hdgst": false, 00:36:17.115 "ddgst": false, 00:36:17.115 "psk": "key0", 00:36:17.115 "method": "bdev_nvme_attach_controller", 00:36:17.115 "req_id": 1 00:36:17.115 } 00:36:17.115 Got JSON-RPC error response 00:36:17.115 response: 00:36:17.115 { 00:36:17.115 "code": -19, 00:36:17.115 "message": "No such device" 00:36:17.115 } 00:36:17.115 09:21:55 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:17.115 09:21:55 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:17.115 09:21:55 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:17.115 09:21:55 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:17.115 09:21:55 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:17.115 09:21:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:17.372 09:21:55 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TK8triHfkK 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:17.372 09:21:55 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:17.372 09:21:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TK8triHfkK 00:36:17.373 09:21:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TK8triHfkK 00:36:17.373 09:21:55 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.TK8triHfkK 00:36:17.373 09:21:55 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TK8triHfkK 00:36:17.373 09:21:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TK8triHfkK 00:36:17.630 09:21:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.630 09:21:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:17.888 nvme0n1 00:36:17.888 09:21:55 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:17.888 09:21:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:17.888 09:21:55 keyring_file 
-- keyring/common.sh@12 -- # jq -r .refcnt 00:36:17.888 09:21:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:17.888 09:21:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:17.888 09:21:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.145 09:21:56 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:18.145 09:21:56 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:18.145 09:21:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:18.405 09:21:56 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:18.405 09:21:56 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:18.405 09:21:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.405 09:21:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.405 09:21:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.662 09:21:56 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:18.662 09:21:56 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:18.662 09:21:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:18.662 09:21:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:18.662 09:21:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.662 09:21:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.662 09:21:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:18.920 09:21:56 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:18.920 09:21:56 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:18.920 09:21:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:19.176 09:21:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:19.176 09:21:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:19.176 09:21:57 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:19.434 09:21:57 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:19.434 09:21:57 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TK8triHfkK 00:36:19.434 09:21:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TK8triHfkK 00:36:19.691 09:21:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.fkdhISneye 00:36:19.691 09:21:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.fkdhISneye 00:36:19.948 09:21:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key0 00:36:19.948 09:21:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:20.206 nvme0n1 00:36:20.206 09:21:58 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:20.206 09:21:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:20.465 09:21:58 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:20.465 "subsystems": [ 00:36:20.465 { 00:36:20.465 "subsystem": "keyring", 00:36:20.465 "config": [ 00:36:20.465 { 00:36:20.465 "method": "keyring_file_add_key", 00:36:20.465 "params": { 00:36:20.465 "name": "key0", 00:36:20.465 "path": "/tmp/tmp.TK8triHfkK" 00:36:20.465 } 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "method": "keyring_file_add_key", 00:36:20.465 "params": { 00:36:20.465 "name": "key1", 00:36:20.465 "path": "/tmp/tmp.fkdhISneye" 00:36:20.465 } 00:36:20.465 } 00:36:20.465 ] 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "subsystem": "iobuf", 00:36:20.465 "config": [ 00:36:20.465 { 00:36:20.465 "method": "iobuf_set_options", 00:36:20.465 "params": { 00:36:20.465 "small_pool_count": 8192, 00:36:20.465 "large_pool_count": 1024, 00:36:20.465 "small_bufsize": 8192, 00:36:20.465 "large_bufsize": 135168 00:36:20.465 } 00:36:20.465 } 00:36:20.465 ] 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "subsystem": "sock", 00:36:20.465 "config": [ 00:36:20.465 { 00:36:20.465 "method": "sock_set_default_impl", 00:36:20.465 "params": { 00:36:20.465 "impl_name": "posix" 00:36:20.465 } 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "method": "sock_impl_set_options", 00:36:20.465 "params": { 00:36:20.465 "impl_name": "ssl", 00:36:20.465 "recv_buf_size": 4096, 00:36:20.465 "send_buf_size": 4096, 00:36:20.465 "enable_recv_pipe": true, 00:36:20.465 "enable_quickack": false, 00:36:20.465 "enable_placement_id": 0, 00:36:20.465 "enable_zerocopy_send_server": true, 00:36:20.465 "enable_zerocopy_send_client": false, 00:36:20.465 "zerocopy_threshold": 0, 00:36:20.465 "tls_version": 0, 00:36:20.465 "enable_ktls": false 00:36:20.465 } 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "method": "sock_impl_set_options", 00:36:20.465 "params": { 00:36:20.465 "impl_name": "posix", 00:36:20.465 "recv_buf_size": 2097152, 00:36:20.465 "send_buf_size": 2097152, 00:36:20.465 "enable_recv_pipe": true, 00:36:20.465 "enable_quickack": false, 00:36:20.465 "enable_placement_id": 0, 00:36:20.465 "enable_zerocopy_send_server": true, 00:36:20.465 "enable_zerocopy_send_client": false, 00:36:20.465 "zerocopy_threshold": 0, 00:36:20.465 "tls_version": 0, 00:36:20.465 "enable_ktls": false 00:36:20.465 } 00:36:20.465 } 00:36:20.465 ] 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "subsystem": "vmd", 00:36:20.465 "config": [] 00:36:20.465 }, 00:36:20.465 { 00:36:20.465 "subsystem": "accel", 00:36:20.465 "config": [ 00:36:20.465 { 00:36:20.465 "method": "accel_set_options", 00:36:20.465 "params": { 00:36:20.465 "small_cache_size": 128, 00:36:20.466 "large_cache_size": 16, 00:36:20.466 "task_count": 2048, 00:36:20.466 "sequence_count": 2048, 00:36:20.466 "buf_count": 2048 00:36:20.466 } 00:36:20.466 } 00:36:20.466 ] 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "subsystem": "bdev", 00:36:20.466 "config": [ 00:36:20.466 { 00:36:20.466 "method": "bdev_set_options", 00:36:20.466 
"params": { 00:36:20.466 "bdev_io_pool_size": 65535, 00:36:20.466 "bdev_io_cache_size": 256, 00:36:20.466 "bdev_auto_examine": true, 00:36:20.466 "iobuf_small_cache_size": 128, 00:36:20.466 "iobuf_large_cache_size": 16 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_raid_set_options", 00:36:20.466 "params": { 00:36:20.466 "process_window_size_kb": 1024, 00:36:20.466 "process_max_bandwidth_mb_sec": 0 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_iscsi_set_options", 00:36:20.466 "params": { 00:36:20.466 "timeout_sec": 30 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_nvme_set_options", 00:36:20.466 "params": { 00:36:20.466 "action_on_timeout": "none", 00:36:20.466 "timeout_us": 0, 00:36:20.466 "timeout_admin_us": 0, 00:36:20.466 "keep_alive_timeout_ms": 10000, 00:36:20.466 "arbitration_burst": 0, 00:36:20.466 "low_priority_weight": 0, 00:36:20.466 "medium_priority_weight": 0, 00:36:20.466 "high_priority_weight": 0, 00:36:20.466 "nvme_adminq_poll_period_us": 10000, 00:36:20.466 "nvme_ioq_poll_period_us": 0, 00:36:20.466 "io_queue_requests": 512, 00:36:20.466 "delay_cmd_submit": true, 00:36:20.466 "transport_retry_count": 4, 00:36:20.466 "bdev_retry_count": 3, 00:36:20.466 "transport_ack_timeout": 0, 00:36:20.466 "ctrlr_loss_timeout_sec": 0, 00:36:20.466 "reconnect_delay_sec": 0, 00:36:20.466 "fast_io_fail_timeout_sec": 0, 00:36:20.466 "disable_auto_failback": false, 00:36:20.466 "generate_uuids": false, 00:36:20.466 "transport_tos": 0, 00:36:20.466 "nvme_error_stat": false, 00:36:20.466 "rdma_srq_size": 0, 00:36:20.466 "io_path_stat": false, 00:36:20.466 "allow_accel_sequence": false, 00:36:20.466 "rdma_max_cq_size": 0, 00:36:20.466 "rdma_cm_event_timeout_ms": 0, 00:36:20.466 "dhchap_digests": [ 00:36:20.466 "sha256", 00:36:20.466 "sha384", 00:36:20.466 "sha512" 00:36:20.466 ], 00:36:20.466 "dhchap_dhgroups": [ 00:36:20.466 "null", 00:36:20.466 "ffdhe2048", 00:36:20.466 "ffdhe3072", 00:36:20.466 "ffdhe4096", 00:36:20.466 "ffdhe6144", 00:36:20.466 "ffdhe8192" 00:36:20.466 ] 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_nvme_attach_controller", 00:36:20.466 "params": { 00:36:20.466 "name": "nvme0", 00:36:20.466 "trtype": "TCP", 00:36:20.466 "adrfam": "IPv4", 00:36:20.466 "traddr": "127.0.0.1", 00:36:20.466 "trsvcid": "4420", 00:36:20.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.466 "prchk_reftag": false, 00:36:20.466 "prchk_guard": false, 00:36:20.466 "ctrlr_loss_timeout_sec": 0, 00:36:20.466 "reconnect_delay_sec": 0, 00:36:20.466 "fast_io_fail_timeout_sec": 0, 00:36:20.466 "psk": "key0", 00:36:20.466 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.466 "hdgst": false, 00:36:20.466 "ddgst": false 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_nvme_set_hotplug", 00:36:20.466 "params": { 00:36:20.466 "period_us": 100000, 00:36:20.466 "enable": false 00:36:20.466 } 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "method": "bdev_wait_for_examine" 00:36:20.466 } 00:36:20.466 ] 00:36:20.466 }, 00:36:20.466 { 00:36:20.466 "subsystem": "nbd", 00:36:20.466 "config": [] 00:36:20.466 } 00:36:20.466 ] 00:36:20.466 }' 00:36:20.466 09:21:58 keyring_file -- keyring/file.sh@114 -- # killprocess 3951622 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3951622 ']' 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3951622 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@953 -- # uname 
00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3951622 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3951622' 00:36:20.466 killing process with pid 3951622 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@967 -- # kill 3951622 00:36:20.466 Received shutdown signal, test time was about 1.000000 seconds 00:36:20.466 00:36:20.466 Latency(us) 00:36:20.466 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:20.466 =================================================================================================================== 00:36:20.466 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:20.466 09:21:58 keyring_file -- common/autotest_common.sh@972 -- # wait 3951622 00:36:20.729 09:21:58 keyring_file -- keyring/file.sh@117 -- # bperfpid=3953078 00:36:20.729 09:21:58 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3953078 /var/tmp/bperf.sock 00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3953078 ']' 00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:20.729 09:21:58 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:20.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
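
[Editor's note] The second bdevperf instance (pid 3953078) is started with the configuration captured above, handed over through process substitution, which is where the -c /dev/fd/63 in its command line comes from. A condensed sketch, with $rootdir again standing in for the workspace tree:

    config=$("$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock save_config)   # captured before pid 3951622 was killed
    "$rootdir/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config") &
    bperfpid=$!
    waitforlisten "$bperfpid" /var/tmp/bperf.sock   # autotest_common.sh helper, as traced below
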
00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:20.729 09:21:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:20.729 09:21:58 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:20.729 "subsystems": [ 00:36:20.729 { 00:36:20.729 "subsystem": "keyring", 00:36:20.729 "config": [ 00:36:20.729 { 00:36:20.729 "method": "keyring_file_add_key", 00:36:20.730 "params": { 00:36:20.730 "name": "key0", 00:36:20.730 "path": "/tmp/tmp.TK8triHfkK" 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "keyring_file_add_key", 00:36:20.730 "params": { 00:36:20.730 "name": "key1", 00:36:20.730 "path": "/tmp/tmp.fkdhISneye" 00:36:20.730 } 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "iobuf", 00:36:20.730 "config": [ 00:36:20.730 { 00:36:20.730 "method": "iobuf_set_options", 00:36:20.730 "params": { 00:36:20.730 "small_pool_count": 8192, 00:36:20.730 "large_pool_count": 1024, 00:36:20.730 "small_bufsize": 8192, 00:36:20.730 "large_bufsize": 135168 00:36:20.730 } 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "sock", 00:36:20.730 "config": [ 00:36:20.730 { 00:36:20.730 "method": "sock_set_default_impl", 00:36:20.730 "params": { 00:36:20.730 "impl_name": "posix" 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "sock_impl_set_options", 00:36:20.730 "params": { 00:36:20.730 "impl_name": "ssl", 00:36:20.730 "recv_buf_size": 4096, 00:36:20.730 "send_buf_size": 4096, 00:36:20.730 "enable_recv_pipe": true, 00:36:20.730 "enable_quickack": false, 00:36:20.730 "enable_placement_id": 0, 00:36:20.730 "enable_zerocopy_send_server": true, 00:36:20.730 "enable_zerocopy_send_client": false, 00:36:20.730 "zerocopy_threshold": 0, 00:36:20.730 "tls_version": 0, 00:36:20.730 "enable_ktls": false 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "sock_impl_set_options", 00:36:20.730 "params": { 00:36:20.730 "impl_name": "posix", 00:36:20.730 "recv_buf_size": 2097152, 00:36:20.730 "send_buf_size": 2097152, 00:36:20.730 "enable_recv_pipe": true, 00:36:20.730 "enable_quickack": false, 00:36:20.730 "enable_placement_id": 0, 00:36:20.730 "enable_zerocopy_send_server": true, 00:36:20.730 "enable_zerocopy_send_client": false, 00:36:20.730 "zerocopy_threshold": 0, 00:36:20.730 "tls_version": 0, 00:36:20.730 "enable_ktls": false 00:36:20.730 } 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "vmd", 00:36:20.730 "config": [] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "accel", 00:36:20.730 "config": [ 00:36:20.730 { 00:36:20.730 "method": "accel_set_options", 00:36:20.730 "params": { 00:36:20.730 "small_cache_size": 128, 00:36:20.730 "large_cache_size": 16, 00:36:20.730 "task_count": 2048, 00:36:20.730 "sequence_count": 2048, 00:36:20.730 "buf_count": 2048 00:36:20.730 } 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "bdev", 00:36:20.730 "config": [ 00:36:20.730 { 00:36:20.730 "method": "bdev_set_options", 00:36:20.730 "params": { 00:36:20.730 "bdev_io_pool_size": 65535, 00:36:20.730 "bdev_io_cache_size": 256, 00:36:20.730 "bdev_auto_examine": true, 00:36:20.730 "iobuf_small_cache_size": 128, 00:36:20.730 "iobuf_large_cache_size": 16 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_raid_set_options", 00:36:20.730 "params": { 00:36:20.730 "process_window_size_kb": 1024, 00:36:20.730 "process_max_bandwidth_mb_sec": 0 00:36:20.730 
} 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_iscsi_set_options", 00:36:20.730 "params": { 00:36:20.730 "timeout_sec": 30 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_nvme_set_options", 00:36:20.730 "params": { 00:36:20.730 "action_on_timeout": "none", 00:36:20.730 "timeout_us": 0, 00:36:20.730 "timeout_admin_us": 0, 00:36:20.730 "keep_alive_timeout_ms": 10000, 00:36:20.730 "arbitration_burst": 0, 00:36:20.730 "low_priority_weight": 0, 00:36:20.730 "medium_priority_weight": 0, 00:36:20.730 "high_priority_weight": 0, 00:36:20.730 "nvme_adminq_poll_period_us": 10000, 00:36:20.730 "nvme_ioq_poll_period_us": 0, 00:36:20.730 "io_queue_requests": 512, 00:36:20.730 "delay_cmd_submit": true, 00:36:20.730 "transport_retry_count": 4, 00:36:20.730 "bdev_retry_count": 3, 00:36:20.730 "transport_ack_timeout": 0, 00:36:20.730 "ctrlr_loss_timeout_sec": 0, 00:36:20.730 "reconnect_delay_sec": 0, 00:36:20.730 "fast_io_fail_timeout_sec": 0, 00:36:20.730 "disable_auto_failback": false, 00:36:20.730 "generate_uuids": false, 00:36:20.730 "transport_tos": 0, 00:36:20.730 "nvme_error_stat": false, 00:36:20.730 "rdma_srq_size": 0, 00:36:20.730 "io_path_stat": false, 00:36:20.730 "allow_accel_sequence": false, 00:36:20.730 "rdma_max_cq_size": 0, 00:36:20.730 "rdma_cm_event_timeout_ms": 0, 00:36:20.730 "dhchap_digests": [ 00:36:20.730 "sha256", 00:36:20.730 "sha384", 00:36:20.730 "sha512" 00:36:20.730 ], 00:36:20.730 "dhchap_dhgroups": [ 00:36:20.730 "null", 00:36:20.730 "ffdhe2048", 00:36:20.730 "ffdhe3072", 00:36:20.730 "ffdhe4096", 00:36:20.730 "ffdhe6144", 00:36:20.730 "ffdhe8192" 00:36:20.730 ] 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_nvme_attach_controller", 00:36:20.730 "params": { 00:36:20.730 "name": "nvme0", 00:36:20.730 "trtype": "TCP", 00:36:20.730 "adrfam": "IPv4", 00:36:20.730 "traddr": "127.0.0.1", 00:36:20.730 "trsvcid": "4420", 00:36:20.730 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.730 "prchk_reftag": false, 00:36:20.730 "prchk_guard": false, 00:36:20.730 "ctrlr_loss_timeout_sec": 0, 00:36:20.730 "reconnect_delay_sec": 0, 00:36:20.730 "fast_io_fail_timeout_sec": 0, 00:36:20.730 "psk": "key0", 00:36:20.730 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.730 "hdgst": false, 00:36:20.730 "ddgst": false 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_nvme_set_hotplug", 00:36:20.730 "params": { 00:36:20.730 "period_us": 100000, 00:36:20.730 "enable": false 00:36:20.730 } 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "method": "bdev_wait_for_examine" 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }, 00:36:20.730 { 00:36:20.730 "subsystem": "nbd", 00:36:20.730 "config": [] 00:36:20.730 } 00:36:20.730 ] 00:36:20.730 }' 00:36:20.730 [2024-07-24 09:21:58.787697] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:36:20.730 [2024-07-24 09:21:58.787792] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953078 ] 00:36:20.730 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.730 [2024-07-24 09:21:58.818797] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
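The launch just traced condenses to one pattern: build the JSON config in the shell, hand it to bdevperf through process substitution (which is where the /dev/fd/63 in the command line comes from), then drive the instance over its private RPC socket. A sketch under those assumptions, flags copied from the trace; gen_bperf_config is a hypothetical stand-in for the echo of the JSON shown above:

config_json=$(gen_bperf_config)    # hypothetical helper emitting the JSON shown above
./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config_json") &
bperfpid=$!
# once waitforlisten sees the socket, every subsequent check drives the same instance:
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length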
00:36:20.991 [2024-07-24 09:21:58.846817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.991 [2024-07-24 09:21:58.936465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.251 [2024-07-24 09:21:59.130174] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:21.816 09:21:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:21.816 09:21:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:21.816 09:21:59 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:21.816 09:21:59 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:21.816 09:21:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.074 09:22:00 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:22.074 09:22:00 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:22.074 09:22:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:22.074 09:22:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.074 09:22:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.074 09:22:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:22.074 09:22:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.331 09:22:00 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:22.331 09:22:00 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:22.331 09:22:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:22.331 09:22:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:22.331 09:22:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:22.331 09:22:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:22.331 09:22:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:22.588 09:22:00 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:22.588 09:22:00 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:22.588 09:22:00 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:22.588 09:22:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:22.845 09:22:00 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:22.845 09:22:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:22.845 09:22:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.TK8triHfkK /tmp/tmp.fkdhISneye 00:36:22.845 09:22:00 keyring_file -- keyring/file.sh@20 -- # killprocess 3953078 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3953078 ']' 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3953078 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953078 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:22.845 09:22:00 
keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953078' 00:36:22.845 killing process with pid 3953078 00:36:22.845 09:22:00 keyring_file -- common/autotest_common.sh@967 -- # kill 3953078 00:36:22.845 Received shutdown signal, test time was about 1.000000 seconds 00:36:22.846 00:36:22.846 Latency(us) 00:36:22.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.846 =================================================================================================================== 00:36:22.846 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:22.846 09:22:00 keyring_file -- common/autotest_common.sh@972 -- # wait 3953078 00:36:23.105 09:22:01 keyring_file -- keyring/file.sh@21 -- # killprocess 3951614 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3951614 ']' 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3951614 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3951614 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3951614' 00:36:23.105 killing process with pid 3951614 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@967 -- # kill 3951614 00:36:23.105 [2024-07-24 09:22:01.035889] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:23.105 09:22:01 keyring_file -- common/autotest_common.sh@972 -- # wait 3951614 00:36:23.363 00:36:23.363 real 0m14.114s 00:36:23.363 user 0m34.918s 00:36:23.363 sys 0m3.327s 00:36:23.363 09:22:01 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:23.363 09:22:01 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:23.363 ************************************ 00:36:23.363 END TEST keyring_file 00:36:23.363 ************************************ 00:36:23.621 09:22:01 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:23.622 09:22:01 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:23.622 09:22:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:23.622 09:22:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:23.622 09:22:01 -- common/autotest_common.sh@10 -- # set +x 00:36:23.622 ************************************ 00:36:23.622 START TEST keyring_linux 00:36:23.622 ************************************ 00:36:23.622 09:22:01 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:23.622 * Looking for test storage... 
00:36:23.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:23.622 09:22:01 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:23.622 09:22:01 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:23.622 09:22:01 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:23.622 09:22:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.622 09:22:01 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.622 09:22:01 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.622 09:22:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:23.622 09:22:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:23.622 09:22:01 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:23.622 /tmp/:spdk-test:key0 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:23.622 09:22:01 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:23.622 09:22:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:23.622 /tmp/:spdk-test:key1 00:36:23.622 09:22:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3953441 00:36:23.623 09:22:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:23.623 09:22:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3953441 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3953441 ']' 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:23.623 09:22:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:23.623 [2024-07-24 09:22:01.715258] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:36:23.623 [2024-07-24 09:22:01.715340] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953441 ] 00:36:23.882 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.882 [2024-07-24 09:22:01.750882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
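The prep_key trace above shows how the raw hex keys become the NVMeTLSkey-1 payloads written to /tmp/:spdk-test:key0 and key1: an inline `python -` step builds the TLS PSK interchange form. A sketch of that step; the byte layout (key bytes followed by a little-endian CRC32) is inferred from the observed output rather than read from the script, so treat it as an assumption:

format_interchange_psk() {
    local key=$1 digest=$2    # e.g. 00112233445566778899aabbccddeeff and 0
    python3 -c '
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: CRC32 of the key, appended little-endian
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
' "$key" "$digest"
}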
00:36:23.882 [2024-07-24 09:22:01.777996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.882 [2024-07-24 09:22:01.872153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:24.141 [2024-07-24 09:22:02.125904] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.141 null0 00:36:24.141 [2024-07-24 09:22:02.157967] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:24.141 [2024-07-24 09:22:02.158442] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:24.141 306413434 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:24.141 198464149 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3953556 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:24.141 09:22:02 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3953556 /var/tmp/bperf.sock 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3953556 ']' 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:24.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:24.141 09:22:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:24.141 [2024-07-24 09:22:02.223701] Starting SPDK v24.09-pre git sha1 78cbcfdde / DPDK 24.07.0-rc3 initialization... 00:36:24.141 [2024-07-24 09:22:02.223791] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3953556 ] 00:36:24.141 EAL: No free 2048 kB hugepages reported on node 1 00:36:24.141 [2024-07-24 09:22:02.256211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation. 
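Those two `keyctl add` calls return the serials (306413434 and 198464149) that the later assertions compare against. The kernel-keyring round trip the test exercises, sketched with the first key:

sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # -> 306413434 above
keyctl search @s user :spdk-test:key0    # resolves the name back to the same serial
keyctl print "$sn"                       # prints the stored NVMeTLSkey-1 payload
keyctl unlink "$sn"                      # what cleanup() does for each key at the end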
00:36:24.399 [2024-07-24 09:22:02.284139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.399 [2024-07-24 09:22:02.371445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.399 09:22:02 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:24.399 09:22:02 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:24.399 09:22:02 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:24.399 09:22:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:24.656 09:22:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:24.656 09:22:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:24.913 09:22:03 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:24.913 09:22:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:25.171 [2024-07-24 09:22:03.233114] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:25.429 nvme0n1 00:36:25.429 09:22:03 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:25.429 09:22:03 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:25.429 09:22:03 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:25.429 09:22:03 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:25.429 09:22:03 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:25.429 09:22:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.688 09:22:03 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:25.688 09:22:03 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:25.688 09:22:03 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:25.688 09:22:03 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:25.688 09:22:03 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:25.688 09:22:03 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:25.688 09:22:03 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@25 -- # sn=306413434 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@26 -- # [[ 306413434 == \3\0\6\4\1\3\4\3\4 ]] 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 306413434 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:25.946 09:22:03 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:25.946 Running I/O for 1 seconds... 00:36:26.879 00:36:26.879 Latency(us) 00:36:26.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:26.879 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:26.879 nvme0n1 : 1.02 5708.25 22.30 0.00 0.00 22257.08 12913.02 36117.62 00:36:26.879 =================================================================================================================== 00:36:26.879 Total : 5708.25 22.30 0.00 0.00 22257.08 12913.02 36117.62 00:36:26.879 0 00:36:26.879 09:22:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:26.879 09:22:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:27.139 09:22:05 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:27.139 09:22:05 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:27.139 09:22:05 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:27.139 09:22:05 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:27.139 09:22:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:27.139 09:22:05 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:27.397 09:22:05 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:27.397 09:22:05 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:27.397 09:22:05 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:27.397 09:22:05 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:27.397 09:22:05 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:27.397 09:22:05 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:27.655 [2024-07-24 09:22:05.702155] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:27.655 [2024-07-24 09:22:05.702240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5690 (107): Transport endpoint is not connected 00:36:27.655 [2024-07-24 09:22:05.703231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fb5690 (9): Bad file descriptor 00:36:27.655 [2024-07-24 09:22:05.704230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:27.655 [2024-07-24 09:22:05.704250] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:27.655 [2024-07-24 09:22:05.704264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:27.655 request: 00:36:27.655 { 00:36:27.655 "name": "nvme0", 00:36:27.655 "trtype": "tcp", 00:36:27.655 "traddr": "127.0.0.1", 00:36:27.655 "adrfam": "ipv4", 00:36:27.655 "trsvcid": "4420", 00:36:27.655 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:27.655 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:27.655 "prchk_reftag": false, 00:36:27.655 "prchk_guard": false, 00:36:27.655 "hdgst": false, 00:36:27.655 "ddgst": false, 00:36:27.655 "psk": ":spdk-test:key1", 00:36:27.655 "method": "bdev_nvme_attach_controller", 00:36:27.655 "req_id": 1 00:36:27.655 } 00:36:27.655 Got JSON-RPC error response 00:36:27.655 response: 00:36:27.655 { 00:36:27.655 "code": -5, 00:36:27.655 "message": "Input/output error" 00:36:27.655 } 00:36:27.655 09:22:05 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@33 -- # sn=306413434 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 306413434 00:36:27.656 1 links removed 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@33 -- # sn=198464149 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 198464149 00:36:27.656 1 links removed 00:36:27.656 09:22:05 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3953556 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3953556 ']' 00:36:27.656 09:22:05 
keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3953556 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953556 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953556' 00:36:27.656 killing process with pid 3953556 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@967 -- # kill 3953556 00:36:27.656 Received shutdown signal, test time was about 1.000000 seconds 00:36:27.656 00:36:27.656 Latency(us) 00:36:27.656 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.656 =================================================================================================================== 00:36:27.656 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:27.656 09:22:05 keyring_linux -- common/autotest_common.sh@972 -- # wait 3953556 00:36:27.915 09:22:05 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3953441 00:36:27.915 09:22:05 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3953441 ']' 00:36:27.915 09:22:05 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3953441 00:36:27.915 09:22:05 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:27.915 09:22:05 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:27.915 09:22:05 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3953441 00:36:27.915 09:22:06 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:27.915 09:22:06 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:27.915 09:22:06 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3953441' 00:36:27.915 killing process with pid 3953441 00:36:27.915 09:22:06 keyring_linux -- common/autotest_common.sh@967 -- # kill 3953441 00:36:27.915 09:22:06 keyring_linux -- common/autotest_common.sh@972 -- # wait 3953441 00:36:28.483 00:36:28.483 real 0m4.910s 00:36:28.483 user 0m9.262s 00:36:28.483 sys 0m1.571s 00:36:28.483 09:22:06 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:28.483 09:22:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:28.483 ************************************ 00:36:28.483 END TEST keyring_linux 00:36:28.483 ************************************ 00:36:28.483 09:22:06 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:28.483 09:22:06 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:28.484 09:22:06 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:28.484 09:22:06 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:28.484 09:22:06 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 
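The check_keys assertions that ran above (linux.sh@19-27) reduce to the following sketch, with bperf_cmd being the rpc.py -s /var/tmp/bperf.sock wrapper seen in the trace; names and jq filters are taken from the traced commands:

check_keys() {
    local count=$1 name=$2 sn
    (( $(bperf_cmd keyring_get_keys | jq length) == count )) || return 1
    (( count == 0 )) && return 0
    # the serial bdevperf reports over RPC must match what the session keyring resolves
    sn=$(bperf_cmd keyring_get_keys | jq -r ".[] | select(.name == \"$name\") | .sn")
    [[ $sn == $(keyctl search @s user "$name") ]]
}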
00:36:28.484 09:22:06 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:28.484 09:22:06 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:28.484 09:22:06 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:28.484 09:22:06 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:28.484 09:22:06 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:28.484 09:22:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:28.484 09:22:06 -- common/autotest_common.sh@10 -- # set +x 00:36:28.484 09:22:06 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:28.484 09:22:06 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:36:28.484 09:22:06 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:36:28.484 09:22:06 -- common/autotest_common.sh@10 -- # set +x 00:36:30.385 INFO: APP EXITING 00:36:30.385 INFO: killing all VMs 00:36:30.385 INFO: killing vhost app 00:36:30.385 INFO: EXIT DONE 00:36:31.318 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:31.318 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:31.318 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:31.318 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:31.318 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:31.318 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:31.318 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:31.318 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:31.318 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:36:31.318 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:31.318 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:31.318 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:31.318 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:31.318 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:31.577 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:31.577 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:31.577 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:32.954 Cleaning 00:36:32.954 Removing: /var/run/dpdk/spdk0/config 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:32.954 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:32.954 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:32.954 Removing: /var/run/dpdk/spdk1/config 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:32.954 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:32.954 Removing: 
/var/run/dpdk/spdk1/hugepage_info 00:36:32.954 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:32.954 Removing: /var/run/dpdk/spdk2/config 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:32.954 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:32.954 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:32.954 Removing: /var/run/dpdk/spdk3/config 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:32.954 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:32.954 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:32.954 Removing: /var/run/dpdk/spdk4/config 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:32.954 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:32.954 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:32.954 Removing: /dev/shm/bdev_svc_trace.1 00:36:32.954 Removing: /dev/shm/nvmf_trace.0 00:36:32.954 Removing: /dev/shm/spdk_tgt_trace.pid3633348 00:36:32.954 Removing: /var/run/dpdk/spdk0 00:36:32.954 Removing: /var/run/dpdk/spdk1 00:36:32.954 Removing: /var/run/dpdk/spdk2 00:36:32.954 Removing: /var/run/dpdk/spdk3 00:36:32.954 Removing: /var/run/dpdk/spdk4 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3631811 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3632534 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3633348 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3633787 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3634483 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3634622 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3635330 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3635346 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3635588 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3636798 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3637881 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3638129 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3638432 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3638632 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3638821 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3639138 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3639570 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3639821 00:36:32.954 
Removing: /var/run/dpdk/spdk_pid3640132 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3642502 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3642664 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3642826 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3642845 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3643266 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3643281 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3643702 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3643715 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3643998 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3644003 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3644179 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3644303 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3644672 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3644825 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645020 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645186 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645288 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645397 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645566 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645834 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3645993 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3646144 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3646307 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3646579 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3646732 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3646892 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3647167 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3647324 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3647477 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3647690 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3647912 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3648066 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3648225 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3648501 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3648663 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3648817 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3649097 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3649253 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3649332 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3649594 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3651667 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3654217 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3661206 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3661616 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3664118 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3664284 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3666790 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3670607 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3673175 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3679562 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3684770 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3685972 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3686645 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3696856 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3699137 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3752789 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3755951 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3759776 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3763604 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3763606 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3764235 00:36:32.954 Removing: /var/run/dpdk/spdk_pid3764805 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3765455 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3765933 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3765957 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3766210 00:36:32.955 
Removing: /var/run/dpdk/spdk_pid3766244 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3766263 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3767408 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3768066 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3768716 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3769100 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3769123 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3769260 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3770141 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3770949 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3776172 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3801241 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3804020 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3805195 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3806390 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3806526 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3806661 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3806796 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3807110 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3808421 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3809051 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3809451 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3811063 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3811438 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3811928 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3814315 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3817671 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3821710 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3844449 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3847831 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3851467 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3852408 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3853491 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3856066 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3858306 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3862503 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3862512 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3865281 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3865413 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3865547 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3865915 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3865939 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3867010 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3868190 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3869365 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3870544 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3871720 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3872900 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3876698 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3877027 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3878958 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3879776 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3883371 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3885337 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3888634 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3891958 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3898170 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3902507 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3902589 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3915434 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3915839 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3916251 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3916776 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3917351 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3917763 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3918163 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3918572 00:36:32.955 
Removing: /var/run/dpdk/spdk_pid3920974 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3921208 00:36:32.955 Removing: /var/run/dpdk/spdk_pid3924991 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3925040 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3926772 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3931676 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3931684 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3934475 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3935848 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3937252 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3938103 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3939512 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3940273 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3946163 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3946555 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3946943 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3948499 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3948779 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3949179 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3951614 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3951622 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3953078 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3953441 00:36:33.215 Removing: /var/run/dpdk/spdk_pid3953556 00:36:33.215 Clean 00:36:33.215 09:22:11 -- common/autotest_common.sh@1449 -- # return 0 00:36:33.215 09:22:11 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:33.215 09:22:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:33.215 09:22:11 -- common/autotest_common.sh@10 -- # set +x 00:36:33.215 09:22:11 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:33.215 09:22:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:33.215 09:22:11 -- common/autotest_common.sh@10 -- # set +x 00:36:33.215 09:22:11 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:33.215 09:22:11 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:33.215 09:22:11 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:33.215 09:22:11 -- spdk/autotest.sh@391 -- # hash lcov 00:36:33.215 09:22:11 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:33.215 09:22:11 -- spdk/autotest.sh@393 -- # hostname 00:36:33.215 09:22:11 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:33.524 geninfo: WARNING: invalid characters removed from testname! 
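The coverage steps that follow are one long capture-merge-filter pipeline. Condensed for readability, with $src/$out standing in for the long workspace paths and $LCOV_OPTS for the flag bundle repeated on every invocation (both shorthands introduced here, not names from the log):

LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external'
lcov $LCOV_OPTS -q -c -d "$src" -t "$(hostname)" -o "$out/cov_test.info"     # capture (hostname was spdk-gp-06)
lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"   # strip vendored and system code
done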
00:37:05.616 09:22:39 -- spdk/autotest.sh@394 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:05.616 09:22:43 -- spdk/autotest.sh@395 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:08.892 09:22:46 -- spdk/autotest.sh@396 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:11.417 09:22:49 -- spdk/autotest.sh@397 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:14.696 09:22:52 -- spdk/autotest.sh@398 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:17.222 09:22:55 -- spdk/autotest.sh@399 -- lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:37:20.505 09:22:58 -- spdk/autotest.sh@400 -- rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
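The autotest.sh@394-400 steps above merge and prune: -a adds the base and test captures into cov_total.info, then repeated -r passes strip DPDK, system, and example/app sources from the report before the intermediate files are removed. The same commands condensed into a loop (the --rc flags shown in the log are omitted for brevity):

  # Merge baseline + test coverage, then drop paths we don't want to report.
  lcov -q -a "$OUT_DIR/cov_base.info" -a "$OUT_DIR/cov_test.info" -o "$OUT_DIR/cov_total.info"
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov -q -r "$OUT_DIR/cov_total.info" "$pat" -o "$OUT_DIR/cov_total.info"
  done
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR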
00:37:20.505 09:22:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:20.505 09:22:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:37:20.505 09:22:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:20.505 09:22:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:20.505 09:22:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:20.505 09:22:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:20.505 09:22:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:20.505 09:22:58 -- paths/export.sh@5 -- $ export PATH
00:37:20.505 09:22:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:20.505 09:22:58 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:37:20.505 09:22:58 -- common/autobuild_common.sh@447 -- $ date +%s
00:37:20.505 09:22:58 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721805778.XXXXXX
00:37:20.505 09:22:58 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721805778.TIaqHI
00:37:20.505 09:22:58 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:37:20.505 09:22:58 -- common/autobuild_common.sh@453 -- $ '[' -n main ']'
00:37:20.505 09:22:58 -- common/autobuild_common.sh@454 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:37:20.505 09:22:58 -- common/autobuild_common.sh@454 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:37:20.505 09:22:58 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:37:20.505 09:22:58 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:37:20.505 09:22:58 -- common/autobuild_common.sh@463 -- $ get_config_params
00:37:20.505 09:22:58 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:37:20.505 09:22:58 -- common/autotest_common.sh@10 -- $ set +x
00:37:20.506 09:22:58 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
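The autobuild_common.sh@447 step above builds the per-run scratch directory: mktemp -dt with an epoch-seconds prefix gives a unique, dated workspace under /tmp. The pattern in isolation (variable names match the log; the resulting path will of course differ per run):

  # Timestamped, collision-free scratch workspace; mktemp fills the XXXXXX.
  out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
  SPDK_WORKSPACE=$(mktemp -dt "spdk_$(date +%s).XXXXXX")
  # e.g. /tmp/spdk_1721805778.TIaqHI, as seen in the log
  export SPDK_WORKSPACE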
00:37:20.506 09:22:58 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:37:20.506 09:22:58 -- pm/common@17 -- $ local monitor
00:37:20.506 09:22:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:20.506 09:22:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:20.506 09:22:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:20.506 09:22:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:37:20.506 09:22:58 -- pm/common@21 -- $ date +%s
00:37:20.506 09:22:58 -- pm/common@21 -- $ date +%s
00:37:20.506 09:22:58 -- pm/common@25 -- $ sleep 1
00:37:20.506 09:22:58 -- pm/common@21 -- $ date +%s
00:37:20.506 09:22:58 -- pm/common@21 -- $ date +%s
00:37:20.506 09:22:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721805778
00:37:20.506 09:22:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721805778
00:37:20.506 09:22:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721805778
00:37:20.506 09:22:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721805778
00:37:20.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721805778_collect-vmstat.pm.log
00:37:20.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721805778_collect-cpu-load.pm.log
00:37:20.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721805778_collect-cpu-temp.pm.log
00:37:20.506 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721805778_collect-bmc-pm.bmc.pm.log
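start_monitor_resources above fans the pm collectors (cpu-load, vmstat, cpu-temp, bmc-pm) out into the background, each logging under the power output directory and leaving a <name>.pid file that stop_monitor_resources signals at teardown (see the kill -TERM lines below). A simplified sketch of that start/stop pairing; the collector scripts and the -d/-l/-p flags are taken from the log, but the detail of the launcher writing the pidfiles itself is an assumption made for illustration:

  POWER_DIR="$out/power"
  prefix="monitor.autopackage.sh.$(date +%s)"
  start_monitor_resources() {
      local mon
      for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
          # -d: output dir, -l: log to a file, -p: log-name prefix (per the log)
          "$SPDK_DIR/scripts/perf/pm/$mon" -d "$POWER_DIR" -l -p "$prefix" &
          echo $! > "$POWER_DIR/$mon.pid"   # assumed: launcher records each pid
      done
  }
  stop_monitor_resources() {
      local pidfile
      for pidfile in "$POWER_DIR"/collect-*.pid; do
          [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
      done
  }
  trap stop_monitor_resources EXIT   # as set at autobuild_common.sh@466 below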
"${MONITOR_RESOURCES[@]}" 00:37:21.337 09:22:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:21.337 09:22:59 -- pm/common@44 -- $ pid=3964698 00:37:21.337 09:22:59 -- pm/common@50 -- $ kill -TERM 3964698 00:37:21.337 09:22:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.337 09:22:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:21.337 09:22:59 -- pm/common@44 -- $ pid=3964700 00:37:21.337 09:22:59 -- pm/common@50 -- $ kill -TERM 3964700 00:37:21.337 09:22:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:21.337 09:22:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:21.337 09:22:59 -- pm/common@44 -- $ pid=3964728 00:37:21.337 09:22:59 -- pm/common@50 -- $ sudo -E kill -TERM 3964728 00:37:21.337 + [[ -n 3532270 ]] 00:37:21.337 + sudo kill 3532270 00:37:21.348 [Pipeline] } 00:37:21.365 [Pipeline] // stage 00:37:21.369 [Pipeline] } 00:37:21.386 [Pipeline] // timeout 00:37:21.391 [Pipeline] } 00:37:21.406 [Pipeline] // catchError 00:37:21.412 [Pipeline] } 00:37:21.429 [Pipeline] // wrap 00:37:21.435 [Pipeline] } 00:37:21.450 [Pipeline] // catchError 00:37:21.459 [Pipeline] stage 00:37:21.461 [Pipeline] { (Epilogue) 00:37:21.477 [Pipeline] catchError 00:37:21.479 [Pipeline] { 00:37:21.493 [Pipeline] echo 00:37:21.494 Cleanup processes 00:37:21.500 [Pipeline] sh 00:37:21.786 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.787 3964829 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:21.787 3964963 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:21.800 [Pipeline] sh 00:37:22.112 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:22.112 ++ grep -v 'sudo pgrep' 00:37:22.112 ++ awk '{print $1}' 00:37:22.112 + sudo kill -9 3964829 00:37:22.123 [Pipeline] sh 00:37:22.406 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:32.387 [Pipeline] sh 00:37:32.674 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:32.674 Artifacts sizes are good 00:37:32.688 [Pipeline] archiveArtifacts 00:37:32.695 Archiving artifacts 00:37:32.927 [Pipeline] sh 00:37:33.214 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:33.229 [Pipeline] cleanWs 00:37:33.239 [WS-CLEANUP] Deleting project workspace... 00:37:33.239 [WS-CLEANUP] Deferred wipeout is used... 00:37:33.246 [WS-CLEANUP] done 00:37:33.248 [Pipeline] } 00:37:33.268 [Pipeline] // catchError 00:37:33.279 [Pipeline] sh 00:37:33.559 + logger -p user.info -t JENKINS-CI 00:37:33.567 [Pipeline] } 00:37:33.584 [Pipeline] // stage 00:37:33.590 [Pipeline] } 00:37:33.607 [Pipeline] // node 00:37:33.613 [Pipeline] End of Pipeline 00:37:33.643 Finished: SUCCESS